[Binary tar archive — content not recoverable as text. Archive members:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)]
ИWu.KG4Z;'^O2O0`fzsj/9ܟ̅=y #ij aswBH&bbj:0t%D/b~zۦph@HxXBhg{dW1s+C*oԲ%W,74#$y)!#@|c9NZbU޺fI#wrY1$NnHns=_|kIiJ-j nzɥ⹬f pnvUL 5r1Υ, уEm>\ ,&ÍεLBkp6KƛQʃ(㖲Pnef,SYx|vZlkۇ[zNϦq^o_a8o\bИml16hiV:*!)\NҀ^8-C &CR)mLVd3 93u}p6K0HΨRw,y/{ۅXiHy%)Q%1E#Pha7€"R6+'my籋<gRP+Iw\&D.RNAíD#MBY XүvckozN9紻'awFB?˟'wJ@JJYJ W! >_y9EOjͮZejyIGykjkOF? c¹:cS`pG腵$&Ad xᢵQF)U= W͕yi~N}4Nez\2N|HУw~''t>B4JC:xoۏKZjO"TG-V-N'i+Tm.ӹ(*z{'uzKVy9R3Í-5/_lUv].vs>Yc(j:RVutN&}Gi-l&&_8N;+QgVK)CCu9ErG1)˝BiI៿ ? tqxYY}gϲd*Xk@֊!KYH"ZóqvjfYezf@OZ,R賉W]$Uḧ́ez -eKT\vXB"o#\*Z xN Gf"AųjBA[x$чshETRƢOB$tt Ce $QWeY%*9K-i > #[C)hz2yNh 79I@>+'E}H0$SȠkOYDyuٓ>=MkzL~Wh3>uj킊%Cq>uR8{T*\)ltA"(\YW;bޙҵ̺מU<g9:gdfAH?Y)q+LQA[ Xc`4֋ tTH32U~dK㩘I޵"엶{l/Cݾ@-6/C[,y%9YbI֑ddHPgf 9+=CT:Y\)rc_at|VeQ-a7[wHi?_#4ߺ98Z`h;#. [dd!.4ki}RDCY %>@kJMYvQץ_j_`~w^uNo;clwϸ#Dyu:E ~w`Oͯ~CzL7l\RdzC%~mA[ zٶHHH)qz3̬<0oD fTIV,5Ih7il1L?@wά2 *R h2aH u?P9IKT@ rAs>?DN^q,Ƴ,nq}T(q Bq6dMiR4E<%Iv OȓCjlM\H6L 'Bkt YP@3Xyv]O3pºS^/iBsh9]*uNJ.qTQ!R%!eɥx[.6.)B2bDb)'4)D)Le_R/לk S ;\:\eR^kCAt~~‰g0NC;ۋI\>qgC}?$?wcnatM>u=#ٓ쩎Wyuq">tV|5Qz4Gr꽳rvQh׾ww6>f?"rV S W:m`=H:'7t"s'JՇm"iv!sjE2]%\og[B||}t^QNU";RZᐳN WQSoW\"Ë{(5 :KsLcFo}\CbDΉvw׽-kpPv\rAڑoI#]7 ÚqEfy?u`^żB^wǛ&'T㨌lu6ɦQ*cQGf2> H'vL,nۏuGAr1x>%h+tV;q`x=1PNRbTrꯓ)vǝнB!/{~>ſ~凋߿\˻?Ž^%Y Z>Dy<ЖǛ ͍eh]r>$Z{ ױV!1~@\N$N/me4[][U̖T5HCICuYng`.E q I H2hxX?t)}FjB [ 9+^Rk!(i7&5td ZԿ ɉ?ʇH$@p;4ƹm62Qz^By 5ͩ,{dl}T?*udü< 2losIyɎXdk_Cze|(5)D'RȣRF![GxJ,Bh 3?;t]w[w$*V.K9_ii7_Ax9si -k3<77ڻHz潌Q rQBy`'*Tv=S^hH$Ҙc~aڨ5gR əD<Yr #g$*qvǔ s|nMe;1È}"/5$UA$ V +тt sQs#T ('2;Eo$+t6ql1!Ze\YZPR|7GoQr^eXjN?ڜDeӊ $P#Ik'\Xt hy)w2-$QswERh:~4 @ 1P |R/!RV'S.yg+*D7 n<]w.zzsueuFր疞Fw1\X}fl8ft=݅wzp3 K:}3I G7s !eל[Wغn]MZo?͖p߼uز–Ք[wv>Χvyf~˛=|t?]x-[+Mip^ s;ggMwm)O<ןFxm'ڡ6`lsKmG͙_HeWErXq\NE+,6Z[TsVs`QDoU2190F)(ju2|Y$Rcd ,Es.%eKuI6 HJ %POm'1L]ZiSblQCWKd@\&6bZx zr |8)=ϖZOF;'nQg:taw `4;gȝw 2XaG</UFY{_/0`j]p54H T~sI>=O '+>D HAi BIMZБҨ-SpIdrسe9=UHJd24@r47^kbBr.p9+F)=-:宗QNL+P؜QExSڂ&fg:qL8k-S*! 
&K"ܓcteLL2T=L0Rd=II8 QQRJf,Fv͸E=u$eu$.<.\z35Fw0-o~' * >+,$$F)-t!T ZV^╔"@]hr1< \lq&W:(jfNH161aAX$)}_Sq<];ڴ֦naX`(! XB4 "҈`s}ZCr>LRiTȐ & 3A!pǛ8\,I@P^ blׇQ? ˊdR}5"+Y[XBYYC Apf+H)ZzKY@QF.;MEJ7\VZI;yI{v("^XP&֑HdJSLR5BEɽu2&DW"X@C9E( Eɋvnqz2%Jmt)'\DbIRNVn"R SI)}.Q= $!ŜhWaL^X"%)C$ $F:>)FWӊ#7^7ܝfߩ z䈐l ** D11P:< )W!9%I1_/; C-\ %ɵg56ED9L7\[qRkj&# @j Lg0ftr.YIHV(-T-W'ZߍfLDSFg sp 2vT2 w,qg?U+c'$cCkmdݩ5/0=G>h&t𩿐q^{~GѶIqzwKWypRNx=Ȥq7{w9+8ݛ_zUopyb-[CІ~K(Png83~/ ƫ[e`y 1?G@傩DU p&4TsbY5V׋vO7Yǀ7iΧL[5Rͬ]kNfArI=Bn ȪwΛi8,\PapkE [l/LQA0yjrrYV2Hf+`O0W,VBȂ 3 F?X&j^^ޘXzݫĶmD=jK- ~;hYh-LO=L+Zxfp&qsSynqEGϐc{Kօ+zx3v?MEl9mx`IHf Se';6q7RF6scXtţy,TZج3%IE_ӕ۔tQ{PԨq8:8)p6i04sb2DT[&x9ԂEi!E0-KdQ)#cBHhTs} n)Q6:c+Bvzݺ^qjeK Y+iXJ(!Dxx+QBU*«'Z*ÜD'l]w(]mpe=%:?^-..1&|z1"|4X =h୍c!D9o &p( mrJoX+t +;clwϨ#Y6.r^|7Lץ'[켜>;|0>*#? &/jK6:)VB@*-P_ zWq>tuz{V|ۿ|y0z.B|!g::x4" wdEq H7!rFF-|m4yOgsb7y BOqNE3Q'7pu!TL ?C c&>e^n/|S7ɷ?&|s]' D}?sI!<\C3?)"d+(߼f[d!\vsntOJNU~(ػ6,WB`61 dq+""ZQrSMdv'ͦoߺstu' ^/V7;Ne7{&lgC+7cJxaT`oN`LF/udWgr-sB_p4:@_3gG_{o7g[:+gSq6гuN; j/7N޹KRcWdX|[[yu_o>~@fn|=Apyˁ<:lQ98[7x}->j@3|+v*7;SOVuf[zgG)7mj;@gոcTLM0{elbiN>}_9зoq, B\-#Y:Wg;3d:j.T@uFpU_./7.<6%w-xEȃ߳ǧQ;p6w3K#ulŶ`Uaj4,?"UEGd;+vnÏ3pP-jL=v] $odyu5_#\>)#%eB %ilh$]5YYU.G$v8\H¾_yt/7 njF|mݰvÔ<W:J}Z`+e_.5ɩd"j4ZxcZUtߝ9[FYm ֩z뇚}T9e#iQc.h-Rm*SUUgUiե?0lߞ.p Jn=T0nJk}ߨ*Ds-(!p5&u2NfrµAǨcNN7cԚfoVH:SUHʒ7ђ4h1"'Zឮ;]M"1M)'jf :#M1EP䔄ջ{pODKF=yc#,G@똕o΋{w(qLBM":JJ0T0[Bȅ0*Sm-Vbx{v Z>O}tBb&u2bI:OS,HUƝGQQ%cNօM,4~PO/'r.>n {Vf$jZ:'$Y1ڢDsEEoÄܷJ0e=,Q(JI26)P%G[GWh V %YA-Nij,I:'Eէ |hh‘)*li_cF>{(.Qc 0gM(tGhO$ARlT"yˌ@4ZE=flEJ9g'`"I@= `-iЮl4#`BI d2U X.]S%5ZAN:M::Y @@ަ wf+%ZM=nX7zhl NUBB Bꕶr{FKQuER]ꋨZFȼL!Gѯ@HdAQh8PZMil,@E@H v/ІUTGwX|p.#h8 xO\ng̚hN3ƄUL$/JvH')AP"/ڄ{aV9 ;,.[ MZ|ou+B%h>H`"2$F89:pԥ: (}vG>+&{@\WhR;p1'84;ǍLj,(tF\8g@(z& H&zZ5輪 @6oB⸱c}@^BB%>y/d:[q:p2Od]_ ?/AQD" <–&k)ANE"}٪{^Ls@ Ѩ/сwK|CZ6@ @~3 hcJI%h v% ѱcEX %@EW@Pl5k:h NmLmezŖAa8(@/"}R8f}]qcmBg"16#Ƃy0) !&}Iqh-dl%VmZb~L>C:gN& h( `f%u-A)5&mVu52I.V (4yZ.a* Tmom0&Ǎ܃gwv.Ŝ~٬i׋Ւv5ufFU n=\lJ''Q40zP[;3 
5HSp$Jhuܵ%5՟j%'ՒFk7vS<>>6|IaFlI*CwCR^Rt CۚbCm>z%''2"V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+>a%A 2(`ROF k@@x;V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+>]%ho'rkt@07h*%P?t(@hȳ@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X ;S=ji=PKz5{ݲ/5w=,o}ެ澼i¥UX(sM V(kݯ .=Jq #KK^ 巛rٸGSaA[{ТۃհS**g,}|U*a;]|5o{Ѷ 㬃O6~ u+tu;hr5;~. s?ײW⥛Q1gL; |{v1&Y!5] 񱋖 aȇg_"9zhvj6Vg]GT~Ӳ1Ǖ'~?7cr^og?J˝si +)=z%coee~|D\?{Wȍ0_v2d/]A^p_.Z#9jٓ-vdIVKDYⱻ")XUa`4ӎ.؟ko%h[l{kk[9#d2qH{KUߕ_6o/VUJR!NU-~_\~v۴k%\H۹fSh}=T]k[T#a}נ$o6}_>`>|5̅2 6$[@^՚z5oj69[trr6_~AG_Ah[bͯn1輍FZd+j+sJ,O?Y9[S(F JNddT`;C_#2/e]l-zɾEkTtqfd$*GJbdSϘ8P=!YGx4FXpѷ, L*1s[^< mU>]ao * r-> \a<<շ}6> [8M}*H{cYƴN+MëVLj?5 Svٿ?n˧3yby\[L{4c38s *-niv(3|GDT2+F>ʫ6 3-`P SB~ Nݧfm-ۍ_xd$kQoF|HQ ODrńf]?X6q6&Gv\G $ncr[4|9V03pOխq۽ n&ܞK+n ƹi/P vQ/<ڼ?@8+y RTu&mT1(4N%rEb]yao/Aw7 ~LFCgΛ;mgǾ+i쁿2`Xn)d|a*P\C Gs;xz'z:v,YwLc#[rA>e#ʢ 'BhňŎxHafgK7_}7\Zцf+UsTNZ [E*ÆUJ6Uu&nLnfN:fr6rh݄5:z w1q' (]@tQ^_=tP̆}7 d%fn_}TC͕7!yy07~یyȶo}|T?bo0wmV iEm'X,fPޗxt=a b7Ly˭͏d(%me:0HO71}NA[_PIL  /R҂#>SAYkyET>RFa*%~OGsR,䱪SĐXgshgg}], Ij s-Tt|z úzL>:y$IU򌂆{AM$4J@ }F"!+0$8:cLzDYIѫj$D9N`.0 z# T)-VJ\f348U4\+SP0֟Bld L͟p:qxzӳQK@ROQVXj %hX0B[ΜZؖP2R:S\-&x#32x5sf"Tnd&~mkYQ0 9ʋuXxP,\W3K+-1[F(?n2(;.P`i/'/wZ(WBH&u>pJE l2=F[*Fݐ? JlR!H9$B!#a:0- 9rog3bFǮ36PckcC i@! F-8wZH`03O)c)m.#88f`!3 hG9AQNq );2Y#Fu}ٌ͌Q?dr"""ɌC׍x'V5X83t6 }1)V&D츲XѼ6V!` `(sRb%#1)DXҠK0I̥쁙18[_.DVaI%'+Yo@)2F>3cs; }s5.r]wS4z8L0+&A@K LQ -9+R 3{ɚjhАsp C!fH `RZ{1[,} *%f318{s'yn]v*(Ց(BZjt?30˯W}EI:E ,.2`օX$(&^x\;]N{O`tuҭ5\٣!P12xr 4z+ӏ>xq|#ǃl-55\͵Ca2kt܎390un頦{V)oScJ)a#'izk^4VUgERLVԩ7jYsOrm;vtvDZ!U]r0#@LRj fccPvetX9fOo|=t6{X?K41,d1#1-BNw Z8?,G-]&| 9gEӂ@ M+y0]. 
kX_s̫ Ac,,H$S 4t y`ǰ,Γ%tB{b}}toU-9_us~sj*;"T g%QjJ\%K3L;sTF쭖&f`1`-ƀ&`(3*h2:Z0rD`*:;C6q6kJx<Mbýկ{@^7k6dȟvX6Eݮ,#Yb{jY6GT>PSd^J9ا/:V5 o`IȻ7?•L]å?'(O)8'ؿ?}^L+ty?)jH^T DA^JHę2޼M7_5v\Bƿ 6 * m.reK_u'Ϻ0{Xׅ†c}z]dګ+ ?}CM8n<,k} /^ wТ^A٥9=ϣLT^8)0gU}*y t6pU\ U:\%))#J9:\%qA3+cupWWR) 3v6p@ZN)\B4.\$.GWIZN\JU9>q59$֧WEW3V3O)盡\4Vu lo9}h_ƣI]Co43*J[.0jӻ[SHݤ?+џog>H-\ C'h盪&}qe,:uqo՛F35yjGo¡rcqR|Hh@SUVKsWoo?{ A_/2&~w[x#S=gtyξ2)np0sH[ 2TJ}êYH3{Ƒ Mo՗y:dXg/k}SBRqOx")Yg]SDqF,\&jOݓT03RW`q>9\y6L! ߪ`WDh=)t}̔?M>|_<\q`  P[Qe@Me"sM8evR yGI#f)s>t% +dF*wkܢLkA@YWf^& {t~ na"o{o h7FհL1ﶀ淉|5/h6sK/u=G,ٱMGN{o2}RG>hMߓ_f܈#TTk _cHj[eWR쇛A?o5ҖOk˧m-gmc#ά^Z)mo:<;iCV- @Q++I+A,*&+t䈛 :P9Bi =p-,,rlG}+.NnYb{XP E>r>XչWbpn]SnK[DQTu%Em+'Q|6~csII{'+߉)AԊF%O`M"{ێEKAd@wTW +Ss?K碈B٢1>29>BBtƬI1Yo -F|jz~Ϯl(;aHG!IYb|E:HaQ*@4ugME% 1kd4F.$^Ҟ-U1ܢP%Ola .Fv ^^qj?\;[:| "shbɑ?|v,7gy$ʳ;߷|&)' JΥ|S/'-' Sŵr\!k?~?Wi"u%V'D XBqs6+39!(ARnDHAI0gR)m>}|zԉ||%y eJ+M _Dlb b2#*1g2$As_i T RM+lo|N5ۧ2wՇc?!Y>h2wI(9Jp8R+A]F_BD%]`L E('EQ2s 6F \(\86^8v\ Qh]P`HQr1rvźȜ(w) Mk n)>;[{g^bv-!w,L^HΓdI$ƒU,Y)mPF(7B_u‘m2'uB%4sl&qю$\'ȧ8 G1Q#jηis7eaX>`FM}r]cO /c)m ;k׻Q ^_Ğfߵ$ 䍣HzrzOn38Op: lc;]~w[+*0mT_o'WSi$d.NwᔳW WS$mE[q&$?>@sYxݝ9̈O\_xs;~vÈ pO̵Ors"d +ۿ_^ jPv&g~iX46N3,QCj+W1[Aw7[Üx 8+#g=dӬ YGy3#d$O > }v+urD^`۱vP(5{Sί#O1%+t)L0`x=9utU:J *S3ynr7W($߽wo>ͻ)?A[|/s1&M¿"@+Sݧ68lj.SO=]%a޿/ϫz?jC< 4Ņݸt|#-U{T]lNU:d=TlqF6l6 ?k Q_$xb 9xew]2 )gJ۩C`h/:DA& iӉ(t֩{LV=ɉ@HF j$9 {ƹ }6'VܣW*}M'nBTb<~pI\}N>jޘe9j l󭎓o%rrbF^#OIѵJ =S,?HĂH ,"`+Ts!Q݌Vysad |DFL!f(;O z/lLLJ%])BT% WģA`hl_,a=s/.\ԓ 6`J'RH@9JqBLRBp9"Q*1$$CsU`@IB̨5q9B h)ߛEo;֩ |?' 
5H5P!ԊPgH9ZJxƷr2V!)I~咢 l/=Z Ys*5&hI\z/@%'A˻XPyy1uhU6HB:L'Չzk0=0Rnn'h#NRkAs#)%d ܲĭɹVNHƆq"#ګ>u>Ԇ 8w`#Φˈ8wZ.S)hcZ%̐efFE;EP`7B(+auB!q´Y){d{nw=K'+_pQGqL\9iaE.p Ԁ$ȵAsb[pHe-tB.Чxr7z_8|ȡ5lߚ˱Ц|l=xj-lX_۫2L!ra߹BUN:50*# D}~J۲ei뇄/WգCPT-Mvu?2YaVlqLeʳ^VӚ{5ҌI4cTrOqYs!]O}%r"%H@HЊ^( (H*&xG6,IIGH@O!Z橏¡(Ɓk"v7]Dar Eԛ'$Ai-X?^T+oOpJ}4I2L4AܨTDr&1IJ"@ wlU 9sR8ez^ 8l :%aU Jn)a*Қ9;nD)Gх8㉺HY]ՅՅ _Ƿ9{TF~5/z/h56 >0I9nR kknFeJ/k3UIS}a׫"#o~3(9"GiQU"h4ЍndeBVp΢txLEOQݣE#6$gZk`C iA" F-8wZH`gRbRH]4U}Y1 !f t r ǐ2"9e}9aSCR{F$-kDrԈGn;)88Ebӥ?FL3EpZz `Sx[bEZal8KFb3[XA`Ir4aKiZֈY#.&'_gkd[Z֋zqU㠏.A(_'к^7.݋Q^G H z@`M\J2y(EDNjXO"Lju@ ~ږJ䪃QW@Ħ喙JTRuTWOP]qL^L}Jw@T*Ywy9EEPހ_ `2u XS,c%O)Ux;BVWoβ,w |鎒Ijp(/xJѿw|TBYr%5(XIڂ__5~yvthOjt42ng6oidŝBiZ̺Ä2}M"&)Υ6J\5WX\8=y>;ŶHgrrZ ߣJ*h-xN.NAښF0Ά>I*톽 /B-Bc4 _@o1-DO7"_gj.7_3/EeQ71}ShgKϳsnќS/'N4|L^O2?~k M^LfGJBRJ9xxa"b 0(4X@I13̶Pg'IV?NN !S.Al红ZRE*]!lm6LW 7(my7晊~ ߟe+/yeqR$md,Žۈ66Ǹ}imN~ e h*M0m9^'1xe|EkNv硸&k2Q+ɾ&]O5)0'ܓd9aL&/ ?ES%Z"kY"MyMKiQi^'a/V|)M\K]Ƨ:?e(4w^o|Lr*̍} Dc2c`zM\eštU{,m΢bvn0z҂u`;qZfk> #@OVwp] hЌRTRWgrR"qU.޳m[g].=bsX٤6`eZ QoJS*H-ʕ]&\gI'nnłM#9 0pjW}C~{O&Z{q{! 
i)tٞ:@7ۆmCTa")FX#4Kġ8Blc4\O*xa5WT\S.>,LrGϲʍO9xw!~޹1FIn`r99QgTaA&9 bHR@CԌjKM'rfNj:Q龫iySTJɄ&G]%5>ubE]%jޫD%;ZOQ]i9p@0+0:uK뻫D%cGuݨ+pEq+b?MEsA~{lYR"W/[:*9^nk3l~ۨiJb@KXBSD:55`UvZX_8νvsϼ"*MPJ*CqXYmxdc=:9;n1hޣj[j'9:Tl4j3aYWTR-;Z21g2 楗hWcXE<'dl(rU^ؼOS6Ŷ^M1"w-f[Y]E8\"Sa=X~{]dme%ea,a+Ƃ x)l \DHr,wU+ (WKM)8F.%1ʂk [R5mW-he|Ld7+DI尵R 1g7"+uB&{5rEs&Bc((`S7D# %S)eW,$s`@畵w⎗I# DJNRd̰4 Cb$HD0 A LlƩ{A]XPpcD)eQXokL"p icfsiV$PD J׳Y8.GY;ښR``qb!9Al F4 R-VR!(H 6Z+BU?qa>^u{tI>KY"(&ɞũ9a<4.qed [qWG-JY&3 @rNA{`!G Fcpi{םe't>}6 ʊN[l*K U9ui~s60>dԝ~d%_ʔvgiS,eN'qqAw[Cnb9M$_ mS*~9XRe |=]{˫ɧU612ҹ ^㧒T/x[qR뿹Jϙ 0WDhmO鲮R õT[DBG i'{3F}v/ucW:d]c匐ytf2:|:8ܰ#~;8+Lhٛ?_إ׳yO<}g_ɀG⒛6q[T fVr~8s'_^߾ǯ~x&󋷿IyŽw&PV?i޵&mu ͺ5Z9)7;Rlk5u_)BepA !P.nߥ%Qr-FiMܭt ޡt͖̩|cQCEINg(n3`Yd *0wHsIM:Gf)Ŭ=Ƨ @;hlHT M+cĜ!Ba̴ΪHP`*=wd BԿb9iNη<:8#bDFQcIl\0ͬ`JRzP,H-<p_ e&,MTɖfE}6ji)j xzx+'dߍ\ȵ"FRsnT=1.pS`n;hE ݡq /)ݎm|>DoM`.9vam1s=v E4A0ɵ 7\ 3f4j|HΜnTk7b˻IEUhӺ<>ur YoV}|ɢ j݇1u淁Ʌ.#/}s_+nZnX`V((Nn(h}nӢ0]o˺EV0- |IWmTj^^C)#2} d1tK~ ffӕ_F-X!rF9*gRbrAsw+>֭.=G=w5!SZ kτ[+d>RtH8H @Z[=<K]躈YwsQ2І뭮-ZXlj48 Xz< EߍE6n͛ 'ErD%m9o簹

vF>v_{e^Ԏ/AthrR t~{3)7j`AR4 3I)$h=oyEBpoTI;Gj4젶x d{Rm.hUSl ѨՒ0}9,7Dh) g 0δvT˼{%>#R_~M\L}ND}yW+0@}f?s/&sӫ.Rma;frc/7\λ>.sPХMe(5BMj34zbp#6ޔsAaq(jmgԞnr6ƵkBVT7il+:`\c.PPhR]_URԁldyFǡ#qF]/Sdrf]qceC@Jb5 ^Ϧ\@lɏYchn\x$Z_\8Å-J"I;@тtE8l8/hwUf9uCq㢟qq]ibYq RGVw5f}maB`[SHhr~c08`<  l`_X0v!O JgKb*u޺KpR}TtO\7e]AG9+MC@`z[S`5noʔ0$DN^rü}(U>R)ySKo\5O˞Z1Il)NDCiK I[ugUWR>wE9Z|]&a'tOBD)~!S1 L(XOy?e&zMsNG;V|;C?Y+ߺ,f[r_쿤usp4 g8 C>svؗC jTɃO fR1Q LVڼ,y2cybyI)ͧNһi^bGc߫e~s~s4igy ;o0z~篶r-=cuo+={39;옧s'[Ӊkv7v?OlŤ_FJ Bh|Q| E}z}үk9 LHwd3qݵL_.ߠȉ[ǏK{tt>{ltcw]>CZm8<_l2]_JY>Z4EB%~vRuv2Cmbh wŢwɀЗl9KlJ /؛Tz{j Z0hslc4×s)f63i;6=VuLuD[c6_vS Z2aD٢b^mJh0aJyHG]ƺykRyU!#XEL%& /ނ"vjBA3+oр>VXՄ' 6ׇf5囼.|.Ki'GK(s;N^3)KO ?myzNq фJ'+ߊH'Ž9MμWy Nm"=: )kTrPGj҂0$>q4b +\.tnnУ()QǔbM+`E'\kqvt;j8^Y|YjȞnmwer5r٧>k;-%<6Rߖ:=r~_2GҲO6SI 7 DBc9WD-&gW#6= xI2WLdm:go FVꦡ5Pi:*$ĮZ5XKgk 5 ;v`08 XxLQr_떖WeS׈ߖ}?0z).Ԇ\}PbE̊ \@s2jLnz3yaGKPjRfiŨGl)?$ìPԶQΨ=3]mvk7hׄ&nVt-I{\,"7/-ǡxB,Lf -h-bMFz[~Q1JB1IuUÆWIF 08`Dt3"Έh/vXDِ5@~XM׳)f%[csDe j::pa!Ffg et˰GįWUjV\\nis03.Lˊ S9Wd5y'\ChVf*Y,5ʼn&g\<. C0 p>jbX?V-Ch;d hQKB@`z[JUv95Wޔ)aIhrü;}R^![?ֶ(E,:XlB#cXbcTh KWT(ՔPTulYRc- JTe}̍,y˚Ǻyyبn׭ޟ=R_?B|9WƁ\Lْs 0Td/ÌGAnk3auC8ZSkA+ 2jb2ȉ|*4)ɤ,BQJ[f2K@uNbNWZĻf݌ep yRɠG{Sj.n?=v/ӽ}7u{l\sp-ycPM4#i5Tg8kb>0xUg+ސ<6ɕ`9wm$IeSRއc{ƶ̇CS"i*RX<$ [6+"1oMp|E?@:6Z#` ϓqݤW+{ 8.9Ճk@ǘ.J hNa&be΃Vs5o1?I1XEn? m@Qӑ]n !KM8dUU|wqÃ#P4c[18 ;żpj k-ZԺnr~JuVN퇸xhDՄPzsP7PrtasejEnM/ PUjzo2 팷1j+ t횆juY0GM_>_V#7JciPȷH0(xS4x%Q H)u,$Ƙ*Fp5lhq 6Ѳ!TA>p,5aD6Ai vf%$l4xx28 CɕՉ+7 OWpMwWt>ױs >W~.rFm^emg^xӅ7~q&_<LɸMRg#u{҅ϳw|Q"g&/Y@[+k&LaCҸ0anq6ϟ"D|kYob^+v'0?.s7&=}/&EO7Är䊈Ì{y< qS~seK]ո W<0smt-A @U*!ä. 
ӆ:Pe5ۉ,C, [$J _RO1CXa㖀:νDYc!F٤<"88[]͍y:eDX5#)J|G)BM*AS/7H)Y7NmuX X2 e9/%@߃}zs^hAYsYY9B>6Cсކ> fj-5 n |ͫݤrN vRUP-E!@}1Rb[goΞ/ Z MLi6b gUe]K]qqP{-oL:6POݏP-XDi)q"p,^0 B ϕRaZfœ6rT{9[h8Rm)豍O4zc%vd(ET>T \҆+\"(Fj6zcB)7j놬og߽-0ye/I˥"rbZ f ~d?j 1J0~~VLe.]r>)֞ ΧB7wT'ECLQα` -NA}9+ SwnFpc18ylS  ͊N[lB\:$U_\8oa@:"³gB)ZtF*QmQ k0"10|\q SHq{ ʃCԧtVOˌb"8+ A 07krlBZd\]z2"~q/w;λRyZJ?do'__%F)e&群<"O7e(sUwk)`k*@S(镏3\׿߿כ|KL˿|̅2.9׉ "j1|}Ӛ447m*MӲ^W]X]iJ!n|B`u_n0Rc];\]l N3**Mp8w;uc?Ef gexPzy4HRkftFW?vPA `BZxm:!R1F)Y L oïL=a^S\!9q{*02X+Md&7N1x =i<.[Q'ʪ=et ؞:y+{ 0wÔ&Xy*?KB\qL\Wj y, lny$XK՞Raw35ӥ@f`k#9oB!gh՞^рmY04\`D$+h[f8n5!]ckH3aJ@4GJD6d# 'Ob ^fyNfS^w^"G7~Tq"NK71hc .H ĺx1 =Uo&be{,=K(v|7NJoP%%&IOǒ'Iѩ'IR &pB8B4Owo\,) R5\sxxKiDZJ46ҡ S#A<=ذ#ʢ #;6+C0!B*Caj3 &Xz-#chnD46&ΖK$q͹^'I9Ab:XLՄKϷp6_v47yvV솪YxqQ]{]BmIqSĀ?.>0ΡtT:/K7P]?EP2yWnx9Z!cMLb5*"}䘦QV .3(68ufb00s$$V8m02Lc1>"'^qmzarIT9WA&#2\*밊%"K"2U>B6xbDMLci RՉ2DrPSW3[d6+CA)4q!壵Dg"#c=hkFsbP[v2.fV]m&e{ә3Qn\Ǻ=~ 093XiY;@:mڞ5V$TrӚ)c1XO ǚ Z16Цm$:! kmv<f8zg?6tߢ=X'y yJM*t3žMgZ{ʑ_efW`IlbL`Q5%d'q/oQ/ILYHK]֥O& M h8p -cVf`WZA=1]ǭEmdKH03$@˽z! xJB{Rܬ 4OkC2he9:g#I'Ì&)9|HO&tD.VzyDd7FH%%+a6VvDd8&bn '܇$d$&ߟhzѐ+=(**k]Ė,^ 99נ,a*(I:xikr*ˬ|֦, hU&eDl &͵+@62VV`a5 XU,qY\$ƛ|`\j/z?i^o]p)&0 Z0QƄJYcYJA6ٜVݼ2C%'dBeUB@L&uYCp:G,[{8; iԮ6ڼELͽB-'! 
9ш٧!#$)Ex6>8L̐yQ!IȢ(lDTfK%ɩFxXMx8vgj 0MZ)"ʈ(ZDlqGVD-)GE(aDg}љəSZ[GeO,gƨ@rr' ɀgB jH1'-Ho8R2"VO@8%ڧjZ).Be\-.Juf9@8-3ah\bNx,M *L8 e}jڱ)x6-@XŬ ~`VkC8ӒE?wh^/bL#Ҡ Bs1< #]2l@рdqFANWȗQyKOcb=: ^!/<2:qϳ H~M2ET ںO|K^@UUM^Fo[9j7Js9b;bE)*VUzߋ)n6ǫ{/2j)%neI$h▍R6^%yjzz[Lɷqp3NcNj{{1'hSIfFl Q=0d`5I+],*!K uDnjN"Ȥ5HAP0KnCBBd3N mb:2}fP5q7 >64~OJb3u*ܓ4v&5Cr?E}dz[+=Z-f8BqoruIxR*)5ûɳD r BOfњgl7)ejji-4 H-i&8o0jf'qI|!=v8~ģu.\#E嗛ͦ]bfvo(2YZYG >IHxpFL PRFb%=n9mcўkYiqx;/hA, єhܡ#v %;D XF+I3‹}RM76o$|]^Xvץ(Ի=mO% s,DPmCʵdyؚf=:ij;/&.{h#"tAH |j^;%&-sd!Y :y=.g4m]n.`,^ϧJsMߣUM*')%RJzL,RRNo-ܬeRP<6J\H,GB8omT&K-Uwh9S-,El5ZIᖽY`m GkT0ωsH): J%T >h={rkv,RKvb]i1cV}-DN|='g[1nvN-;;('x|KN̈(q5RЀw8RM9E`Zf2RޫHOn&mMȻZhTX~ɶeZ;bZFu)Yrԭ٠|VgVAVElḻ!;)W]KyKy/ض+YEH vD|`Za4`sV1+c`")Srbm,ѕl+C&)͖֮j15t;!z3f-`B:Cd:/LEDlI"qy Y*3F M%xNEoˁl=7FғQjW&z g;yg8pkJzֳܑgPg9bo3T?+Ϊl* IijI4QYcA9R6,UOY]YnK-clkJF0h!X&@J$CF! cFi*tRY!\$88 R* M%[;ɱTDv5q|xd5.tw$Iezz{}e]/)c;~HkDIqYs-e2賋.3dL&oDZ{4,]K2+>@`l( utQ[lfiOKM(Q$5V7@Eg ,=jCRBXʙfZ~f۽>FP2#s:(xNΠMW4 x<=L}\E|8 SwuR3^uHD#vo\GM/5\r49 Vw_FS6.i8)Tj9`>unӴugҼ,3uN|xwyq:~F0`j.̙^t2]BZˋ?˜ۓK{uOg˺˻EeH?c`2@u/V襽 K.u*bltFJNJa,~NY͏xg0 U43~:?]~{OuGjeek,?bY$IthóI4xUM?W';{~JF}?LJw<\'^.0D0"$SNwDYC65z:U j~~_͵ų4Kb3"l]JR.+}lq lN5;?lqFF߮~6?^1 A^-IX̩l)g8$N1҄$Nbpks\s6ZO 6&C5OS#3$lƦ>yVFCLL y,pA2(!ٔA52W& }M 7*c50Qlx)[Q{k֭.y2LŶ3M0p/=^y\j{'~cU^$}'bLLJy 1&,(A黊uu^=j0Y$WFkt ;2 wT,yWk%y26jN]{,n|f{%׷G1} Ԟbfq`Jͩ޴q4Q@|Wa*~)Nn }599_}3]G/3 E.g^v-}|lt9's/?|4iۮ364)BJ' K$J/4KF̱Wo&_܈.Y](4h) gӦ7򶓹d{ד& S~7'Ɵ~;\;e f_nga47A?t&QPLסW^uH;W\KOIͿz$^6s{6_*8H!Zk@/ o4goTj6nSJkOk#\! 
,8L)C,EtYRhHrҝ ][o[9+¼mN"Yd1<4vX,nS"{-%`$[r%ʖc CSEǯȺ eUbL:'4,s w6שk# ͐%H DgEe%;UMBVD qXm|%OC6T[mtek0؝xŮ{1t#Q7fW{o֎~oszܻǽaCp> =,\&quDW[WlѝMT.їfLf߉yׯ$f:ucxE̪o|P]~~~~[v耷˫Nw1e4%$=i O@ f&9)8!hQ?#w9iF;O|>ˬFCKt:'4ouy9^*$Ywt2b34?4vmٺ g12k8$Ὀ&% 82@m02tРBG0d$9[@w 46|;_ޅ~C S-͉;<] =:=Pt?vU؝~}Yl;^w}u= +Ntlo!6e {?Wa1_,x\[wܺm[os2miM.;ںۦ|q˸G+-C߸wcG&e8x~QOVF%V5_67fs6Oy34C͇oѡ϶b Nt褍wЫBh}-[đ2~|P58͛&^Wo@xB(ppׇ"l~&&Z @P/TGu)eR5(tx윲^(s\b88s ^Vua)8S(,y籆|"$+ٱ\B!T }T$ b+]`.Fk48B)+Ti}!LBJ$0iWV[8'լ{o@Z;z)z#*tO4Ev>AGnqY>oClC`Xg,A QkкGdY:r/ Z~?ic޲[n4/%CේeHH$=f'ǣ^B3Ӿ7OwR;p8Я< Wv]u9zOU-kXwqv^Q8\P d :P**XCFaG%E6N*.ȗWUZIWUJ-z5pe| ~vzR{fzp8iO qR:y^peWv_V~eNbM<׿ϏoʹibkLBk^R=]jdOW#{~y?i5V4?y~FU(+#::"3DH*`0yQ*Q7{l;6#MҴ "ǽ'|QNIͥ2&il![K<' X+  &Q kX7i֝9Y'hٻ3jqrjvsyFF} ݟzNG) NSM7\.RGb'*IPYL11:R\fWwRY#\T.Vk)7PkըVQC2VYLt>_r&30>' =jYj|gM~It,^9pQM2dW¨1ll1Ttq(ЮBV }+EMD%Y 2ȧ!O)!o.Q=0erYxB*>z8Af6¿!'Al8ɶn  j<3yFLȄ$(,P*;MA#)NZ& eQR aXcgn*a}N{zmV z0hp@o@q COy o/a`|0Y::ަ:Ec  ^LJ#<|4ϫ$i7}`"eR%e \Rul{F- v $0> gB83mցwS=2lO2~)%o%<]V7<,{Aa,0ޛ0&>m, Y %:GQkN', Χ2p&aQ )~bց׍ 0kSٽDMFFw O'kUC5~Vj!>^7>\ru?>:kʻ&6cvHc]vR|-yJjh:&TmS %Ug QTH)T\Dhos2`@ Tkdl&[Yʳ`a38 QE JH>+ z?:tt~<2^9b\lPz1Vۨ`l-`IfJRƓl{Jhj$ A*YaNVd %]ΔZߵg7bIZ1.iǡ Qe *,Д](VF]|Xf0uRJPT,4ÂA4[QeXFfI0qd_x,L"Qy6fި_ܭx.L?ED倈"noɈ-z].C ̆@hy79:9Tܚ|.jk;!mQ+`s҂JVRq&Ig X)F[бfm٣n̯:0.N鬳1.\ܦ!0x6|U^Ɛ \Ri(`SP ㄳcV.iǡx@XÚ/UO7xo?vdk*8;r wlzs-^cң]溔''5[KpL|"BDN,CQ!3bp.AIc!fAY@|yeOJ1zȣU_0fܮn_S՛hy{㿽// Tχq`~qVe|*vP+lvlylz&JEҮ*mE>OVο{Qk@bT:-4uRAż Au:A`ə:=H ,Y<*_ SA" Vb!!B4%" ml2hj>S )hnRjT2F+ZZE碀Bʌ:) ^Ie$L}Qk3q >v7WoJ@_ w"4vf]|ˌ-7Y{o6})/Ct/{Iyҥ9ZzH$P @NJڤ`K1kNUj< Ų>D@j v !gaHI"L0 ) 0ld5||{.jwcM]% jyx&uD{i0P|.-1KaTAo)4@F!)[8fu 5ML|V ?L'\adW"3y O>_ܻ xW=/b4+0bS)5PSŮf BJZ29f!9Oêoϲ,~S kv_т8T2KG֖e- 7! 
^ѷ]nz?+q9Jz85@m`lM#d̠&!{wǦwkxNiwK-Ns\]Awo Hݧ=o$.yD rՑhD s޹d6Ro.?/ì`"-"'&hjzvۤ7 CJ+[>hI8whiKWHr1s˭\nٴh8 -OBaƓ)tr; 7,kIP؄'N^S{F{ .(9D9gBAY y2uRhk!eKf+4%f U 墛٧s o&W3Iܖ=Vq] m|<\R0^@x/.ow}gA(,o$; tސTP-By:M(ڡF؉B}h鳒"RVX/1"'Ϥ%:b?HLuTJ`bF7RشMKj>zO2]RǻUQlU,QuV690>8xj+Y?X}ݦO§ێXEx9F[=@Z MZ#+mK$)" E-"%˭2>Rz[g }fsAM܃D=!"[gMewÏiXVJ14S4ig訄 ʀ*Jc6'ծ,[T& ƜU`)z.+3&^2jШR EgYG$m{EڙZo}L1͵9"#EJ u{K_ڧՇѧ@\ ILdZL%s*E /t߱+X=ō.4y/cԂ$BSq"I/ɔ("C'e59y8s1$ςk$UA$ V +тt sQs#T h$8179 < v\BKgkm)ƚgYϛӨ@X~F  \ց0t{ϱ79VBG-`.>?e L42J&&h8ENcIbnX G\J8.p $ϗmT%J,Obkô҄7<+ɀj rWsKm8t%il|t8c%/1[@nՔd`yåJbA 6C$U4d%r3<$cأ;(cOڽ]LzIx|ث/v<6Gϱ أP2py >#a4B@ɢV>ġshlvG|05u18p{c88-Mђ( b칋5Ȝ ;ϒ\ ,ݍ¦~+ز_MϳkY>]z}N IqM]&4KN)/`6r1uhd!$tT7FȵffW6\ŀa0` %Ն0PD2#Ck4-( 'PԓsV0 2fF>BC(  R=;j]tCKBTGQ t\S>x!W 1J@1>M Q SkN uuT֐$wJŤ|T"0h]tiFsy91\7X)zS]΅1o]?k0޿?_=\rikO /@<:8Vɼ5\ և[rvQx 769wo8B; Vy֋k3q=UxPZIRlt~zXԝ_}Վ%ٯ-R%y3<(*Ts^ɶ]ѤYFKB`njRҽï1}}⛻e"8{( #ș07A_]\Nǖ., ~ co/`6}|-jP%7$n6567#fW4ap(|8߷=yus596VNnjuc_`l8of@k#a(R/3Nw;V]aUyZt785mzMM뒽n|w]VJ/څօ˻qck&>>se8U SQP]ivc Fށzဿ۴1$1)%*&b 7u&1$P4vA󯎴M)xu|D2 8j. 
NHC`1fJX0xDBL&TMʞ^L'=Ξy)[Q'ʪxj ؝:y+C0%WiH/NVZ]8%")hTJ JL2ri0ݩWb e8PBKk1RNj QG!"BrxP%H%)ƽ| In>s+lg1j!imJizGzp_PJ2+KIVԓUf)U~*9L#$lTB !&~6jIзj)` s,NvL r9pu̔_D/ ŏrTEG\T6 uOF)a ൢ Wuc(I&A1qaOWk?#/E4"Du'P4J:AMJ*^D *BP]Cܬ':%yߞax;imߏyLЩF7I?53h[gNY0G%W//ktuq}♧XIJQ.T` "jg]dV3f2D)܄'do_^ /6 `akq$91Z1;,Ld kѲ(tQZ1x$(Lԛq"s%PWs>2"_;j~>fR/.p%8N3*T2\ u\RU.N*ףF@W%KMR1`% !o5$+51L69f#5&錧@}2@ Xg> 7Aٲ$8)qxp&BVh^ocރ-A?7|q/@L6e3Oނ <SM|A%PIy1)!Re1DV¸HМ>iaQp1 = =g塚ׂze4d<G|ҜGN) 7H5;mΑ'.Ftdqj+0Sw7\o.&:\/]{E'熂N-24m;tttzKt0~!Ts_ gbR L!W~aYfz9 80MİA:JF8vHHH ])u%ZIs@t<:ky} Rd *TR^ VyU5շZZMզf^`fWm/B,qU?ݸ$EctM|0UVT3yY}6?X1׬ l=-aXAb3NmbLNOۨ#tp:FӉ@0nmΙbFI z``lFSo= Vտ Ɏm:@sgYI4%2DC6hT*٤M*P;ژeFQ9^]vkJ z+)Yg V 3:wP/~8SvNZۗ^>ENUyޝf!]&ӳ; ̅w|?}rFn*MZ+™nR^'dI2 U[=k'K,^obsz܇.NEtLoP0oݤG/ ~w ;?GḴņ9ڝkKwvt::0/*Xq{>sr::ݧ o~qйƞvL7gQ|%/4gd-E=N> ˢ|wWfZ;pޡ](.5]/5Rdk>CQd rEw & !hp֓PIO!ȃG 8dhmN2)G\R bNY2YMX9or< p}}VGZ=']S]v -X!f\+q%v13|o>e |sz/TKyy J" QeL N7ԩ&ļ6dät>`Ѩ.KVR ]<ݹ}*:?r s-,<0Lv= 0U4Q,3 cr>>iNѰU![Gy}2oMk`5vK`sK,Rmo<} tDz#ьҧ/ L uJTEr `y}&2H.=da At8TYYdqU"(R<#j2,w)(.C '|႘Zr&s9x'M2?&Ζ^64^7;6a8BЎֳ%- >w.wWdF;QS>ܖY|P|޾I[fWvBmVrfͱ/DhsvGb [mSwg ]>!0Ͷ@zoubuﵒ\f/<~<0w[]tmŴO`<ͼ+W }fcm cQFeNBs3C_W ܓ%g\g's%rlf) \2k4sDf`Q5PSX&193dOڛb}xLDeHU1]sΘRK|qx2Ƴ (;E99b\餭a2E).i}b#(O(I {R<+N w.Y墄{֒=n j LGd c DN dH$$n|pP!CJykр~6"BƄ$S4XmR`AsJ( cNLz)={]{K~[E]m-/ݼ/>r#%2O/`2UtM6ʳDC2@+:z9㺲eY-hX 3JˋQ+yIUF2=pѨD++!$~6U!vie6<s!oNHܥ7O=y\Q|ΧnBpB #qЄ )B4N9@#c~];[>1A /Vsc1Q1is29㓅 cj[ٻD%SKF-ҰWfk92dS39z#CI"DI 5]DVo < $FHKTdl٬aڲxğ܅-S(W7ѭwFO>)**oX k]Whtz)w ͹V` *&d`uhkr*ە֦, h "yL6g &͵@62Vg j\VB`uXB'f"Wws޳Y<Vw~@Gpv#H1 l$ңQ&ʘ4KfT0 'EIjoE+n(ƞO66QLL&uY(1hl݈ZlF0 &殠vq[Q3ص9`"GIeхshDT٧!#@%)ExA-a3䊬($dQ 6j"Xr3OE22Vg3F""gƝI'Xk|EeD="nė-9D(S"!ΔS4k$NHzz< ʺ61b d#(ɜ4\&D.8RVF͍D#YB Wd"!ҫE]Z6Jn2.{\\7uf9@8-3ah yd.1lMVM r&py@}aq[? 
sg?}v[Ԃy W3|/T8 貵ulQedIX}]U=Gvtc0es10F[ibr*TP"Ȥ6*adYrLI%ti@o%rzTM-N MD?^OB\ovf&9M=t9mVL?=9잹7!wf>1( R$%xAj*"J&+3tEmQl,u#;9F MDon1B*GVۑ85F qIW[{էXzMm+^ ${JE`tI\SY,* EJgh1ٻ޶vcW}X6Cn`?nwqf}8 Cjd[R8gtldHbkQp>}4?h *lbZNJyknB ۖX\t~'d]u5#h5ѓGV@`,dydE D L -t >=KCvZ#Sc.1y!S Ρa%F5tIxj瓩՝ݦ<߆gwsAA6Qquxs_(=ͱ3^Ș3@hbHSL"Ho )4m>5%A p5R( a$̍`Q(2z 2ڜT5{MsF, nއaan'wae ku /b\CLzcI{ʴ?u\[=c&%JqK@qNQ}8fg@x'tg))4pkXv+GMUHn4D6﮵/4!n~HQNK97ocž M;n{׾U$Yq?O7\r *>^EF(2ˡh,9T+9 i^7+^܌kW^ǝL1̕Ar6튣Dmi(}Ꮺk[rcKJojS3bs3olfUXB4:fbGb'wmmSfcmou1MnVB,[sX#aM8?LH^3 V*~\jZ)~ڟĒivW.![_"C+=jy9Ϋx]g2jP IfCg{}I䏿OO7Ooqa߽wo~~E{LS,d]:`2_oډZMS{U4 5G=jjyC{.[ [b C6*q_RF&T?T!f+p衶Ң ' 0`'ݦ?eguIIdȬ:/_mԑvp&Fڐ3\(9s28*Ao Kd8vF{GiS8?4!"g;JD!l(WH,HVDR,><la'^>(/.O7_=tZxrD#I9LFj36IRk\{sQ!KgBR&qE :& ]hDrD?>ΐvl"k02'*L:*SHFKGŶ׉m,q@XK{9ΝR[ftY7J8|VC;BPO8Ѡ޽rKY+xhhJ^6A Oi^e3[7R9%\Cs'fe$x-˚G,1+O99U#-hnCLWY-:w޾,Ѵz_-Äseib^o;-ϸIva+U_ a魆SϸYzi:ʝ1O=2ωw% uѕI56IވSH:qYpe:jQ{, Zru@G8ƀȤ,D8DQ`lO,,re(l7q/\a2;0j3pΏ-6xC߄Џ=y}H,f<,=*\nN%>YL6.գi[gX9{}8g{wrk&>f͵/vK䌺Ac.&TKfVz{4ɎJevdVH%*̥ujM |nQJPa6׷xn<;*Nu:r>3rXn+\9 )%y|} CL.`#A{7!{8 >C}}~9-VQ(>\h5* d'kzɓ68M’ޏ!Lv#s}.BJt>!ZB&lҤP"ܕdBX 0k7Wn4tsOC#PmAß^Ug$ֳSࣃ/1F A냘M0aMkw0S0u~SSCNuԗ#T^ZX ZZ/Y8ip:ԝ >ڠO p 櫯?>%SҐfikYestF9G`N@AuzdT%"7ԕbJY&/"IZ$ $)KZHX,Vwv/t1n9 ~~py 7Ɂ 7ptN~3rsV}O#ne%KJKsN΍6%`:hQ:X]Ǚs0b\IF }Bp#=ZLJf֝ݚV IƁPPNU ~(x%=c&8'2^? F7dklN$'1 ʈĮGE(QY4cϣ`2!EQ@4%)p:ŬB1e>՝;bb֮&jm^YkNkwvkErꄲɅs DTctPJreTՇY1XYȄ "+:k0hD6dg&K#dT'ZaևQ6[$Ѭ%jDQY#N#vqėڑ%Z?U$4ęr}diIOGmM,gI+ɜ.E Ѫd ,i@z*t5b{2ѫ'SuVCe;MMf9@z[QA 1$3)co*9`!gIvzzTa58TP* *4ۂǚяvG{Ƙ G?K㎊Wxf1\64*sll䱡m'07A3HP]My)/6eɜ9ws!qlfHFs,t0 J(L6Fc(4%EU Yn(Z $`hb!!B>;h+7>g/^W30v)1dLX"\ЙH ^[R kaI/D2vk8;k$\ ܁܁x@nQ-TyzZݓӋ/V MhUtM6ʳDC2݄~!jQ VzB#&RNҢmy1j%w(uT95w0ZK@AAku~IfjS{`!mx3!ɼ#{喜˯.pd?|% ?7A}38\IiQlU=Y2F1O,eE`F*L;s/'~{Pd9l<.3(! l`Ƚe,D'hS11½&&c{&QF%@L\r0EpR pFx%2qlI}f2֝=݉cs=}m7S޳^A?l͛E^sB<ql3/;~-yjOVzr9eJxiI pˬ!8V `0X͝F8F(<N1t ugcș]w$Y ٭,(4{gZ;*mwV+ ܞ;-U "oĹ ! 
ZQ'("xB!nR=wl`;w.6&:3זctOEd/s#RKc!y # сGIO0a$E.E쀧׾Z!2* 6 '00(hHH3O< OƍҸTm:WF0g0ǘFN&beèk`!L1 fYFQͤ':MG!}F= ~67% \zZ\(}0nxpDYڜf{h)Tp)(!kaPm;-PXω5:x-D<|{ЇԈ_9\ D{N?^/a'~ΆʩOlR)۵iF7YUe|[֘b0)zO`dc]2X#Ɉ6W/e-8b_/mґs##OPxʼFO$*BI8e:DcLxb++sz>W|1{}L&l!Dn(,Qe,HFӱCNrCRNewwwM\(ANj4oXŵ?pPc&M5cأ!c4‭iU"bK(^[LzQؖ, 4- ߖ9Ms\~u/nKVyM>O`a]HF_X>8gLZW㬆_ i~2%?414͗/2W&+8L0(@f"qiY"/ bLnJNJ/R!)9U5ԩmPC^P0vY4a#A2rVk/;^sXekaI\ۮTn۰[9uZ u79MuP.->{y{oC#8\뺾_7fs0myI>콁Su.,K>ߏZj\Հ8Uт:5YJ;JZ3HRcV`tU0UdSoip8&|#|73.BS (9'7 L*Y8\^}qBZY)80̋2(XKgl\J,_.YէL]}( |S  XDڶZ]'`e%oȫf\ G %=>קޏG?k\Syk~ˎZ^wwOpޗJw3zPvSz؃{_:M.zZJJbDgUb.f\ϡu"JiuA k.?\\ ~&k%;wLV*;O"*KJ~pdwWT0I.;M+|1p*D.W%\1!$W\y1 XOd=XarAp%!R] \%sXK>wJVR;+!!W`%/\Usd%S\CLh._{@PSK%$")p8_E}'D{_>fEQCks DG{UL*>`ė's/'$Y+>$Yt5Eq:-Q{;c8E*Lr;z Mơ6}>ev GYf▱߈1~O 5q0uy ͟'E+ -ݮL>?]F(7N2e91rѹ~L_>_Q+eR1EY1;˜8ؙ>;bk]C]f_zRC#IH MW|uiF0Ky&Ia猧}2:IT}8Y}wChWߙn F @M{8X~;VV=K]auzXVauzX;z⨚cj§NM8xGNydIugF6# <QS&oYcZa0QsÎ-}y;?T'^%U<&FnާX` ڂ~ib``1i]ҺN` )*+U,9Q#m}ZK'+줍ߡ1֔T6h[>'d\YdemRix`,f(1xf23C:sQ;=rң}JPp!iDKR)fK"ts/wրi j7'd"lNrpri˚+%w׷_Z h3*8LSD:m%t[ V¹n%d%)C;[-2gz"9_i:EF&czh+?P^"MO7KY}44MC!1]EedVėqE9KY:Em\rԕB>P u_k 0$#BBluFF B%UCɞ4575atfzM4afˢW^~!m gT*\ƪ<"wfMXe3DhT2* DYͧTŔnUl#TSҸ]m5r뽞5W݉#rHgm8Se!}7ݯ>כ.TXƇS@|\3&F'9cLwIdBbor y]ﬠ`JQ$@:\@. `VrڃXi?.V8dʃ0ﭧ'Hf+2Bug5RK-V{ixoNWSQdS;^q[\R-82݃ȿM^he&1J b>?Gex|x4][/vQĬ6-(;wFU#)r$y0j0FfY> f[vG y<^zr9㳻&gQ{$Wr0ێ:4+*Ƨ>|U#VxzqCTZ58-}?(pvfYqA.bܿ]-^_[qClt֢XeN66mlUjƥ 8>q~wc ~`bb WQ,ӐЫoW:Ox|Wn۔xCfc@\)E%'(~ӈ)Wu#aӊ!Ɣ($)PQ))cBo5ubLg:9ڪFĆUǍՆf9( (kW} [jaYڙV-ܥ֍[ުEyS'6ja*y[V-7(xb emS^Ͷ`P(1E9iU!uk9;O_IˊRC/M)C횕_Fx׊2]-vϝ=Q?IR m' Qpʞ !d؜XP`nw}%fi -=;VN{"M^! BUtp)MsB{dU7]+mk3w;&' wXt}?>9IW5u$otcM Q5Bj G9z 偞%a̺rp .k'QeїL!6HV0:V2P_PQht&L+tě sMh5>zDOӟ%>ߙŷoBwR"ߙJXns+WhNC2G=Û(.5-TB[h M0zPX{n[D DlDKvVmvV"6Dh vFxY뗢Cv=Hwk=WC6Yv5~ØiqXhˬƛ-2kYc%y1(-J 43A%~WLJ8??e *D& "H!G:ȹ_Pu:~<b[Jeu'2RY ^+W2^{}w;Θ'Z .%_=Ox2g7GST?"*ɶr"_g`',2{ր#BL(j\+D-bEKC%*ʮV) 52vF*aag! 
AXXj+զ|cT&t6^ Mǿ'g_9b*548k`)XBfTD'0 ٰJ682Tc/d%tJ bhF3&J8_d-H><튜;Gy;6Em1j{ Ί22*I&8%JE`1KXt*v,H2kҰ4!cM&"eőjx F5vs?F<֔?6ED1"{DYN%lok]bmH ,٘w@E&pQ:ݭM5 lN*gbd%jr)% ҤZycD쌜5ʐȸ8ifµMθdS\4qŻ8Q m'I*E(RS"1G0NxH-xwl1fx Y}n{;q k06>{ >XeZ ֆvsٙ`Jߙ`m=9XRM {cw0;W\w*~ኩK+h!?\1(`WR̶Ur)aWWL]`4؏ޒ+7dt?xdtl_ }BK߿lzkdk.F6ljVBh x۱!t%XJj폂J-mJa5´ !݁+&ehWRĶS]_`++:޷'$H`L/ W'׿ \=ZFtPZpɫ€U[f|18 ʖ|%Mr4?ǓQs9y Ⱥ-LD+#n E&(6ERoEv*y߿ݟI⪪7W'DroD߬3mf___3_!EH60$-H/ e.F (\:Hɻ"{Ԃ%<5INKR?/ €o:*xo]<ˑwferVjRo_~k4quI-~M^N-Шd\c &8Mv2:t<@&3(r8].Vb|W{(}jhdѸ%yբvv=^E@9$+Yo&,ѐD2XoKz}k"$ѷ@ڂϷ6_q;C\yb8AJ2Vbu) !tJ6Jڱu(SY*>V\SC DM^ v55̿x-X}c"V;|Hw;&,C !ɏw,ЩfXk2`C؟ *$2)d ܬYq I9F1҂$FBɨi,Ϡ$s6ZfΒ9iި{ݑ56Tz;$G E)n0JY h9B !MiJdkD$L  }Cihb҉c_q[7'Y:(C?N;C?t ">e8+o~)8 W3V+ל>21)I s#6AdJHA2RI.}^3J-TC܊vzR2@ۥyJvKA|(gq^-(PJ Pb*Q1@h%Y"c'Y{lNbg.>EiYp,M *WO&`j)`p4-w/]~n99L?M:8M}.H=nD'=aHlɰe!]0wz9LoaB`Pia͝Mo^ǾTe*k*X0A(,dcGBR b<ס9^:A=ujVkR߸)y]PW-Q a LȬ,j@kg%9bU~hJbU`A}_SCg2sNx68{ vg{ټ[mUVpFȭͧ<ͥ/Dh{եs;?!1_̅mPJ/epN;*.:(lͶB*Pur6j)U=jUr>̦-n?>:O~7z_Q1t }{c[s{a/bsKs5_={Nw7ozls^<<8!,/\W\>10ܗ`^ЄUq4D}jdQ)tѢU(18-f5 Hlp^9%^CLv#s}..Q8eƹ,Ag%BW<6Rц\8sBBB0yS"i44VcO/_^Z/bhlm뫊Ly4.x*PN4tí,4ްL?,HS'8)BNmzΞU6hX2$, , J(s (4#`d5@lLWUUQhhUgvf^aeqN};,~|` `{6/=y%775L~+9>R s|U>ǥrq{ZZ4'!0j${'IGqҩdD썵|@c;Px As/V1dfͣdFgA [RVBHXz{}L>ڑʻd+ڸb y95ʦd JFNVA[sb zR*>YzP@I j#c5qv#c=R iƁXX],Tg;W; 3|7<._hx<4o9F*e> e6D1)%3*˂VJf$f[^Ԋ,i3& wh2 Ŭ\$1e}xZK݈eb jWӎCQ*63حRQhBB9@2<1:mp%Q1!Yic $Y AE25X. VӎCPUCu>UMۜzgWr6ZکqYU8Yz>9K٘lØidllв˥3tA> w$MD;v=#%foff. 
4SFJ΄7/1 2!i3#Lb&Ξ۔cs9~.J81A`>^xψ]gZXryX[/- oe.Lrh `«$ZJ0}L%tςEWjC ِ>kMDK =TAB}?wC2r|&CI t6s-zbWr !1!#V+XR c?8Q=7E~xy~:3.@fxCp4ѹH0}^e΋70"͗v%1 ?{F_vضf1@p& &p|ںؒ#IWnɒ4V˖6W7U,*rWSTl7iKY mWJq]nSGumdX_MA%Kda ێah+έ %*Zկ1=#TLHS& RD9OɽyM+aZc/ݯaXNP&?rf5 gB^z_O\GN\\t{jofXl0@sZ:Wzc{f`9Au{C01,[5ż5iUԬ^؛5MU’;/D"r_Hsӭ4R,ꎥ6;$N 'B=`560$,ܣөQJ9βъz|`8yt‘ fIZ[A@BDE <'Zcwx?>R{؞.ߋ/Jlx%- a "hq % uxNɀ y$f%I@![h.P鍧J1G oɐaSIRbgGR.j'WS6מE#:}^C/8L ,?0hͳк`H: ëW>9^G7;1 _&>`Gq7v}wq+|<8uQrGr-R^g wpџkpJX>>PgVк"&k.[+oU'Ϻ][vzի=O.7^EGL9`:Pӷ* '~fr=+ +^t=Ebe&N{8{7ьի Qހvzcl]1ؚ8Yz, AI^~F\+_8Cy! LR=W ]:HǹbIwT#JF@}0\QhFZ-x@k8XlGzhޥ^n_{q J2\GNX)0^ATZ &i.Y(M6ra!-X\ĺ3HRC-.䗓-v68p8IM)fzgtF adl2qR.?($8uRMHxƵQT PAd")!Ȩ9eb˱q/ [־䴠N@D, )<9 B5SN{N_/eDzg֒vÒ;lI =inÕ|K68[}}v>B5pm;t8H(F1!2L~2DŽ] Oe0stwL `q(4!x±p3? W7|p,ޜgT\]X0A+%//Gw`u;o'2`)lol 6Uo'+q Ğ#74j6Io۴O[z(RC l{C?G'GS9۾gcNQ9mCp4h n̰]}ԊIÄ U%+ õa1LYywV~gXY6d $4XNN &$Y92ΥHe-$AU <SGM'14~34pڃXܓ9k,="c҇6hc=aN{3=H*!zB&#du'c2y*&#V,\ ՙ^H1<xdjtꇗAe2<.3M[ n,[i6 j7M5Uav>J]ګGsK=MÍb,vo; ڴjGe:N-*:R%Y0/ }aA"uJ4R6iZeMoUK,U^Cp5&ƲؓyGyz{Y%ykuٲ-q>J٥f.(!5Q'=iM\B7H#ZN΂ǹ*D&qfmh Ѳջ5v!6d ǓIt~=*ʀyWIOC7 E{6wnx7DBx%ʋ&1B \d u‘Mk! V>7A/KuxVi-9r"Дj\B˼Pϝ߻ A 3O/m?4M( LX8ѥR;BErjXcGD"N(|3li b7U ɜL$**FrQOBTrSB-Bjw!'FkcIyTE {\B `NE{\}Dm.dY_8YS} Hq:kr)ςJ;P+$7D|-):qOS@e 6#=`_3rlx.noP$Z3+yDg#1I퉆]Ig쐾?Y0w~CIeɧxӻ { {=^FSeNs#<JaSwIQ)Rk+q )-S(UYWBҽxvKso՛7דypvF3f. 
]Ϋo E8?_Oʐo`ֹqZT5e;.aXd 1Q؃UdiGo_''TcuȦZ*cZ9)NH'_#I_Q~^p9Ǵ7VM)a~$NI󏽟wviǥ>%U^ #*zu|Mt=pjbP9.??}Ûo~pJ^Iy`{Ă;N .}p|UVXߴjbu;|s6P_J߮?J!ZKZyc]owe8U SQPI4 é'^߀n61 ?r7<12.6ONA+ԡt0{ܐQ5H)Q)Ñ;HD)*:#SIXK!ZH<(W"Xj@P'@g;QgJ=je<`!Dw!AemmN3(k9(ݑ]D}xkb ݟx0H(Ë8GVZGE]tBF/Mq>dB$%--"` ')y\:BwYC;|*v ٨"b$JQ!;DY;S+B5B,Hwg3HyOZ~e;VjAqqLNEӂy]"Mv&HiRj'9)2/-=$p ")T- u?,WDhcT21 (2DjDy &g F:S-ҍBnJ׽GApevuueuQks 8 fĊoݶy9v쪏n \\w55>9m趾 qCeYn!F`TT""T@lt󌱖EA=݅ R 8^y),JI0:Ir<M ~B6ކ qs'\PoD$Gb)$G0!#b @'ka7Ik>$^WFqhEP:8V71hB[* [I=@#m 5lԙ4O(kWfX?]]ßJ.VIsY2@]/0kҔM e(L(S".CbiD?lOե A' w5E~ i aX+q$wmX)Mx]<\ڪM3ͺ&or)iL IqE)Q UٖM\>g} u1+b B˼|A`!-4{~M87 `\Y+<1Ӛ78XC'NC =g _|г/YX W1S:pYNsa%/Rt[$YdO,ĢG2znl $81IPDB\ψ51ڪkJ*SXb?xBGt]Zf3z'={[w`?{?-mP|3QWCBDK0cOvІ|EΕ`0 .$F De yl栎|+Y{$\X%U +VTIQ2)ZCs.%Q6hp5t>&/^ \HWͷKq"ĩ-ÅW*\fMPҠ[,v1Eزa(J2>";[yp$qưd/Y|ZahV 1|(6O, LقǓȡ\"8-1* $ɤ2rZUj˙&)t7ܭ "xIRn$?Z)C^{ *$X1蔄U1H*MR8*Ű B  Lo23^$rvqÔ ?LO#6 >0I9J(} hF9")Omh=M<ƞ pg6T^XA1TB6 DҥOJhLcAbc[ԦQVԮ vezjYr1Oh(Ѹ䓣Q1/ H 3U%E0T; T̐ = &D%L{1`A Z[~5Hha.'Ȇ(F/7*P4D=hG5/kM Aq)Kg\p^(G=ZL(n>;, EW'\ǴYll0.WxH!XMT ٨Ny]H:hAUK q(b!cbc[TsQR-!;Jhe PDJ,o5Vԕ9"9&E)V괇#1|VcɄa5rgmqc@1{OU mnBQ+H$BQ" HAc٤L0\{$b(D @'DLbl4v7EBҭ=[0ou}d<{qZ-z*giT|giZ5Z\`fi8(G5ђ[N-B >Xsޓ{A#.Ȓ)E7w` [oQ24 .mHv5?}rm lVײxJAz LeaIvtC\O[wyYCL65g/k`.bkf+^tև9-]Y%%ޱl&C猪 /o'sځkmt&⚻B[{}xp쯧 p=ةvlwER ̣_ Pw&1'l]yi̦&~^񠹉=msł(cPwDFn2bӕORkvMPkiIz&FE0QL'A1ܣ^\[ם"ԓ#a/qT#x҃1F 9lHڒ**g;hPM"BZ%Y|3Чf7ǒض>eyc y`(/#Rk"|r 痓s}76SYq)hQJS{4X|UݺZuV\1 #wI !1X$-NhG I輰f5odsA x:a'([ ,y*Ʋ88J NTm?71/Cuɪ-&1γ%>eY$1*Ћ֝6 __o6d{^>O]oW1VF6N m],mݴ̓v:ݹgSIhZD, $)PŤSR C˒Yu(oޙ_t&}~F N9ث6Ooy=&.یEEovzm'=ʕޜABM"58pI 9No[.񖷨=ɛ%sKK?1l3lhŒ27cWܝX=S2Sc9-xl؛EWq{G~|pXN0nwKIYj|E+txT26Ii ([|79S4Xj#VGjklNRC-*UbGy h4MKNﮝ~},)ڽvE|>泖\]}BJ g>?t<䇞г {wC7J++X3r2pT*Kٰ;\e)pbn+p45$nW)pp4i9? 
T+,W$ogZCځW+,5 G }W"j7t\JG\!1'DE Q HRpEn'WRmO-UJ6q%X^ v\ܞߵRlUCnH+OA H#SQ9M7c/_>Id0<%A\wG_..&?_g?xRlfe7~/{?<LNGSЙk~W$`4oyEƋDó>6Ͽw=)Ƃ`eY5;ʨ픡/"Jոpn1U+x=$WTIj+RQvU;6=?a?`}j'WڞASiΚ#mzNlEBC5P HCtCsG\ R`MEBJjpErՂ+R{襁Tq.1[uW$D-"jZxwhp`'բg]JG\!mȊpEM=ɵ\Z7x\Jt F\!jHpߕ[;c+FT¸vu2ڹ<^ &\ NN8[-pkZ#L\k4~ltW'l&< Yi4UmEjO[o?IVC?Iey3ZVeH0ԳErU5>>H \^0ʋ-˾׷ _\S)Rh+1jߦ[+ vԂ+RۗuN#WRpE{?^NnKک0t\ʡ2q.hf+ W H$fܨ됈g"ȟuH򗆏*Tuk<''t\T04SZ tN 8Z&t%(9կ̞CoΕEW=O~ȋBˏuP]Ac+s/zRc]&Ǣ+ط#: mTNW@>+Md] ocѕ}{)>vNtʐ'DWlO.ũЕest%(:+zJfPOhJZ:zڞ ҕ8!ਦ3w%pi2SVc+AiNf)ҕ< kܕQW6A4ʜ ҕמ&ufܕe7͠'u$SP/>Z&DvGwny4h1{^ijk}в@0LF7oj#J7x聒)OMotDӱ9WЕ壟D֟WCWQ1I|n,^BIh 5~nn97-Ѝ(`}~;(7?]_;n[A=GL?w++>:"1l i[V#Xb+-7>fEVOr8$/o~M u*@mi3߿{kEg]*ZF V[=lWyy*UrPƿ峗 HZk3MpZ}5qO6`vsT(JM Y)n-c3EJ >GT0}plk'CO&h'DWЕ}f6c+46voBWM+'CW@d'HWܔkr+kTJڣ72]=Abvnbw{l;] \BW@B|=] 'zte9#Xq`CWLf]Zut%(i)ҕ;!`~+k'?z᤮"]ykӄ 5Mg!ՓQWAADWO;w4?+hNzFB x/: ^Yw@'(;yYq|p<(zoBևSN񹀇 2+wzd?rr8LWv*NNwNNPprrOE2<vq2t%p ]JPiIG9:^lWl\^]}Qқ1_eypϟ?wzwKoдgۀZo߫$V/ךO%mmՇIWW)_(_ JV۷W/иt5].n, 3nrմ&FbԿ^᭎^r|3ptǑфֻ|p ')u{O{mn&jg/)[➭[3KFju6o!T7vU~l~ayD}* {ẇ e 3p@_5瑻Y)w@`?/]zڼKo Hm^gj}u99P1چS3e8]H %(/G^㻽7 $ >&>B͛V__3_ߺbq3wp99"uW#гn*d>eKIYdVLF3\_tP:Q5fOBJBmZsե 6ݫR_`( u6YmHKeAqVPc#=3٤SW|&˭91:DHnL"TrZ(\ #s#f;^)Ifvhѵh@{]CR R֖q[EM~w!0*kR=^bC- ïUh]#EG(@+Ј*t|uJ* b Zڋ#Q1'Bk=<$(9A$YUȫ;;]SI˪.j(%%QhnbP9\VT_^G#F˜|X"َMѵHIZ%GYG( }52$ H/* Vh\.`5 ja}ɈX8pj6z.WP<D̨uRL`U} TЇUW)6QO )dnJ}Q\` :`Ԟ%E]hTGhO` եf ӘJ2S(_ ,( }FmQxW 5h NAk!O4qouh;gڰ N?j(QN[PyU벫ҕ8Xɠ-] 5![]J++Dеlh 0ճk.keB`o\d Xe=Ҹ1P k9+YPѺ J(Qkj o2 8Kcc'[FpՈTb0VjXq21hqmq6A6p.`K8&X(\J}S _PNj*$_XVp6PLnCDMJ+K2j3|Wh%W e CX @CР,Lh=4v&׹;f[*QC(]g֜AE΂Gܨ nB?%0R88)ԙZAA[:* pqbVZ h\CMEwf%Ji7yԞeAQQ`H 8 C,鄻r{FGU]%ѧVKԪgT0-^7Dr!*"p`l,@5E@H ,+auhޣƻ B\FФa!0(72BAŌTUE1Q 48*WE҉F&}vm;o՝ܷs.ӚRs;/UݸZ|oM/+&>ML[oT^Fq:ʬd &rt%C2 RUF"1,:bP0XaxTq4E"LԴʨ `5J2`^BdG[݉U+)91dkVJ@0Xq0=]n/guN^.ۖs8deiy5`0uvMr63IҢG*7{ ;-,F ݚs)Dҽ$D(Y=yh4vMfcdGrqѰMʌؓ!)a!/Q[tؚbCJH,ܽCbRuL` tB)ѦQYǨZ0x4Θ͊XV=H՞K IXCL@Zq!;69yYghu k!j>@MrVkA\c땀;*C-6(=< VAP8jժàe6¦f@ F\Rčr48Ff* (8qKYkCՓ;Ab~p4Tm֍\ *w^M(`PrAuʃf Ԥ\ZI4L]Xf좀%![ U\ 
mBױ]1;y0B.8= f,/ޯ͋rp6 =89ü5`/QÿK駝z:{7Gҷ2VZH D!@B$" H D!@B$" H D!@B$" H D!@B$" H D!@B$" H D!@B$E~ڔ,M@@AY?#J H yB(W@B$" H D!@B$" H D!@B$" H D!@B$" H D!@B$" H D!@B$" tH H$UCA#E܍xa$ 8j> qTr\H $D$Г@,nB$k!Q !^xډz=i@B$" H D!@B$" H D!@B$" H D!@B$" H D!@B$" H D! "8MH X \%Zj ;z$Pi5""H2&$" H D!@B$" H D!@B$" H D!@B$" H D!@B$" H D!@B$" H D!@o 0Hyݯj{]קi@R&q0GikDKvRB*~%R3.ҽ 7"q+AZ#258L.2z⊁}'M,m!q5G/JIPz⊃i,lJq5UWoP\ e--WF#253+R +dE V=`&˶LR.2(Ln=җ6'ZҪUQWoP\i_ 8Exd6Rw)+~<{gRǘLzحq$Lt0*"Lj8E2^Ѱ":]~L?V L9Ks~ ʼT4o6)+| y2$YxVBCh*؏:c=OEhKHL+ Mퟙ\'1?TJ<D6@9rqUGg*EqՈ+ԫb97w"?`Ł=4qqTRW SZ)j+qզ-*S{GR-+f% q\ig9k?nub ̖&]uͻ7<׃Xw}S͟Ï^zNEס"$N dWP#I=39=_t)E9ObAo?~_״vf>[T?5t I[/9N1IcNc$\*HP1j5yXӗC^_9Cˮ^! 0;S&!yHD@|w]s#=vZ F/ #Ou-7iNuS ֬ElH{4L.cm82Ƒ\ v4NKd-]k\L.ٴ;gwPH+m({J#ͻ=n\ܯ@,3y#ɸ缜9$}CKĚ~gbU^w(]rb@qo%dg!Ԇe8ޙ}s=[7ќ&p%I.8SRFd%lȩ T3u)&2x]%SΫ?\j1K=09yLNV|lN}Rr2g#`jְgoȌ3]-[-Z?/L?vP}3药e~o^Ͻ=dޡn_Q[W?ރn1ؚ74e!lBO`/NE\O,ndԫWjhPH.ٌ=ljvI`.lf@ǖlO ~ߠM=GhG9 1(vEX~ > $.Ҩ- MhFi[s )E-5hFdc6&t:egܯ7);u~_H7N^4 {rsZ#Qkzx &U{^unf.ILog٩6zk'˴Ӆ]kS7{@^~gy ;BF|'0ȵoZ(W[~27?_2m׳vWqQx`=-덛^J>Zcl^ {acjT]<;\Q4o;vEz٢u M+JT֕ JmᎺ|I^u˖9- $^X^JREukp\/w:}SEK> IcDH*ʂr&2 4k>Zzr/w_vafJnם֍ЪW-G2=k{|S^v|1"Vv9' hEޡ6(VQ3Fo+]RY1$9ą3C4295c?pgr:^{ճu< F]kdgS+E-;ʜPN(75)̨SdIg/L- /[^:FrMQ[ ,dY\/3] zi濯Ϸ&IHa B"~R LHGDEY@l᥷Er&Pt2h`~UzW~3]j%%&.s겜41gB8K6`M%(#,JrkeD/~3^87SlޓVyl/OJXy,a[>4cGU@2RSV&tfg߀J)8s E ?#jxzXVnw8{dIE5h4"[rBLieQ/ՠ9N%4&a6j͙Brc$jV k9l Uİo+ }ӹ6^p^xk*Oz.rPMLR!jʥhteE͍P1ƇIpd>mfNpkڛ8]Ev( ^+}E]Zߢ}s H PZ0т cWK7eEѨSr))Y2C7Ԧ3o)HP@V;˝^iTpc5ǹu!'.[/IyI>=Z,A"3N ٮ/^=O;iFebbd&RBx" )/TsH'#Q? 
T[]|:U:LeYl*АcV(-5`] dw~s>oRaR>+_n4!Ye@_7ɓ#ZΨpd[e-~ddEP寮RT&Ydhoԝݴ%o8kأ4] /\$nՃA5,ȕ{T8Uw(]eZp:йسꥵqH&r+iV(SW\lzm:'٪ K۫W;*6uѬ+,ܺۆA͋»+eGK%C\#Eyo~eyN+Q4ڗz]XsYA]'(󚎲FPǹ.x oz)Q\Sə'̭A^~j~.c>\3WiKj;6#AE+,+`QDoUib18N['1!ғH27 RRƹt[")$[ JQ*!$@=yJ^6}b9e  gjZم6 ^z l=9x~|y(rfZtu&X\fɵ-YPG%7xZeS1ۓ>:'+.Z/a3^ (`f'帺:n3ie,Mp(+GyxhވC׹9YD| .KhRsziH!IeGMh4zG\&.%\h8AxgˤbbjWInבlj8ԏ5~*AjNT0NɅp*]I5!xuzkINgz!ɻkK6i=H:'7tiˏR4I_;OS\ZLG FO)4NGW+Y5 ʩ \4'aR]Ѥ`y׆KBB9GJ;ٽ?n&o~xt9}aY9{H #91WK_v1K`Qyp7V׶Ɩ\զff2 1QV:y@W6WѶ UF:V7U Vٱfk$7,}9\Uv'`y_[)_tVa.qj9?0m?~׭B9:_M[ z!KV(3q?v`x5 vV0tPĖ*_mnF/{c WsIݮ/>ܥ\III\Ŕ( (kR`< A׿?-{_|xw4Z`HyNO\2ÌYst0mQlPK5ºKY&19Ҋ`h-:r<"gS`G@-O(x펥Ĺܫh?fKsơ3)rMqeЀe<dF^$c3b |3`ZyBI҂ \Awms1M-ɥ|v CTB},kRAt݀L?N$$>8]a!ٮ4FDȘZHKsyE3ý ]J29akaI/9Ʀi3Gg]iſkzŠ~9=6#]At^"f_V ZUtM6ʳDC2$G R-܂פeAV/N%&;mi-QP`5פR* g|7J7x3#<ݢ %g4'O4u,\M-~#Q¸=)}m]SGZ8LDd74aFx aN9=yʵ9-^mgw'YCRy [K܇H&[sdX:8B\]b7Cfdj0z:ʔ4j4j2@R;xiV VE2ˬ[T5RhՉ ]L:k^ɋcR I L@beC&톚-'ZJJGC';F*W7pkwֈOŭʷo, 1l.9,=iUY:^ spn,A[1`3J$%=)Bm1 ' 7sR8[U*հf숅Bz,+^3;?+zl9<Ѭ{'4~4\7# JcQFL$ F ,U{TAFi< a(ΞGdBhJZfB1e-ލ%&cp1iǮ+6Qg7f-r/rHEEIɅsɊ%#B)Eȭ YMU3WVgĝi9N`-)߽ayty^莭kvOn;' T2grԠcYF^bۼ& tB'biD&Frd]U#Vez>'0(X&%'adHRMpfJ^qke&Ff]B\Vqwc.ֱ)+I8oBUI3vQE쳴.v~K>Yckϖ@ZJ" l8w Rd$kWoq(ƳV &.=ӹMkJɹ)-ySYlѠ%׫FӾ;42yDǕ@Hh21;%K))8Q*Vº5tdLAREd-8$0fLy"`- Tr"f-CQyAk*9Kȁt愉4C +rԲ7}aƍ+餄wIl3t3G蓶+EAw\O \۞U~8ooڢ+n^MA7R.ݰ--?6E6Si>2 o*P, b^6YƋ$ޤsBrܡnvK36Jr=<1C$[[30Ɂ{SEM^ZkL u21MOGD;)R"{+ !֋wna["F;?ۿkp{@zPl1%nqwkθ6Dh|SK+{`.p\97c< :'(![}5? 
1X/Sݖ  &ԝHQmnnN,!+ ^ xy׷ͬԜ0XڐT ReYIe.ZIjHuJ.)3վ吴-Sf+wa<0~.|t(JXp'Oi:;JX JlQq>~.*\4oƸpJŝ+1.FK5<=:Gr_u oy>B4n'm c9c-c$JKp"c`y@ADirG6^0mK2a]4n+:p.J\囻-dwbˡ/?Gm-v#yK7QA_->$q~Ćm6k[1_u[kdWT+m >ĸO[qCtGtՉ'<l6X>|Uj>}mэ"!M:㚦~v/$|@!bE㛫gg0~HM=9Ė隣 3#jwg&SAYgֻi-|t 'YeU텎Zuz l}3:3<0[ ,Znwz?|`{;Nݥ Qc5_uՀ}U6K0& MwC!)r(mqUտ:x]uՂW$'/- (?$Su4\BjCϏT"^a~,E<"u#SңQWZ.]]e*okTW (A4</ZV.+wqVCQt(?Cug_?K1ƯMq()-*1 Z)ܷ'1u@IY r­V8ACj s[{6!ߍGWJ N"|ov4gk|\K{M&zrBq"}jK1ܩ.LMehbL$;U cnHQhTRxbJΔs^0i 5*7[R.&QwTQU6p\Wu7ժrZؔ;D&sJa, wgBTSF%K~swnUꟲ|;ph% FBDQirv:@]J3*zv󼔃Mԏ&h\(bh钅ҠȌc3AS$}E!Aژg(F1-BaEx4IdJ[N hQ#jHN~kqPZύ$M"NR@⩣&ӓs t8[cgC ].h % ,g1,3y0xU8 z\1hcVŔ'Ɣ=br"hbNtmvW^upq{ܔNRa4 y(1UD9&ZW%"xs'\.JŃbaGX;+ *y׫{ZDrhu3L}Xmoa;y.~ٜPMq09.*E 0)yH92bSDp\ hzC|\9;8\ZkZe fǖ @ۚ0/eNf o&W oVC?E*ܥ 0sYfL鼥嘃wX9%B=_Z :4 ߲/fk+ꌠp(_[Rkjvhn:0)ا⦗=pGEѠD$U嗲0 :˾a7] 'V&ܔdl!a'B!hܟ0^pJ$vM9y6WK@J8j=!و2@yt\N@g;?d;\{ՍvJܷ!IzCN?ǻVd2K8?LWiG%Tl.NG-rlR4lh =W1M.ƯrSsۛw113bN/u;~n$j_nFAٻKۛMn7~xmOtU Fn b~pUmwӉٹV N^Ytu֎c^0Ne$w >_7C_WAFP*3S5◛o{'wUUsKV(3q/ab\hvOx]Ts U":b7~\% >z_)^\Q]ԑ`gz9xy׆557kIתQO&|}@DCoҪWR!1v@hcԗt}H"xh'3"^g㹑':DA& bQӉ(tf[;SRV^6K"%-M>P+4@ #ZqRs-u\Xcínkks۫4yqO)rԵ{w=|\xLIfBJ {χl÷P z r+qХ3[JJEEx 9y Ϟh5ykl V(9A XT&!9Fę7\Z|2ިGx#gG u$}8R1P7 A)qP|aC4JxGPAE"\L6( j"ފĜoᝠ$Yd4q_YcBʠ!{hb־y!F+mȾ~=H˻I*ւiFd-WY-wnmD;MM ]9CQPH-(Oր10>lVy6N(H( k p$x6:hpX˯ BQc7&䌐7N>n*ГPZp{_wp9U҇t8 *'}\Ʈ9ͩJQ+8BOo/6s(8pJ(/S!rWQv#B% !<;Q|u b2-zF/fE<5" k9FurY bU/1q¥spHV1Wku2WdtTTDd:2 t s0;D6:bd3I)qBByxnOrS={ޚ஺[s) [wmj Nupu|g|p^Y%kTֆM NmLJ҆}6VymYgObM ^%]?]^ν&< yj}5 O={n+nXs:髛ԯt|s:zp|e]y?Li{O(Ori es&&.!R{K2ehlbmWNԓ)QjM9,p )$h%8I@ a=IJ.EF]g$$CsU@I 1jMzNG #g4&/5Q7TdiS@EAjEYN$c-R%|p{ۨ%> $)K&^z$rA(&R$'o@ĂSρ̎9TA~D\ D3ɰ$3@sR\hW! 
#LkDSFg,j#NRkAs#)%d ܲĭɹVHq!E~/ kݷQ5 _ j݁/ds=aiqᳲ\6 FRXJg+Blx;{p"`NT;oh""ׁ+4^&j@Z@Av @l 7R: t;T;/ș8|_]ɁEL7 ,[BЇ+BP_&xߵ,ԇR"P_Tr{Є=H0fۭn7͞sMk0%[(Z#Gʹ̢5^gCpnaDNQ'r-Rh*ɼ J&\ bhf1 ΀s6zzX}H>j_ׅtu<~wLj nR6+$hhh/ EpC$ow jĖ mDR!Z橏¡(QhAh1r jTN0tY-< l4\#ǎ#רV^1&I&&cET-gcT 1*pT*|R8ez^ 8l :%aU Jn)a*5cclp1Ҙ.l3ԅ@Յ@Z]W];ȸ&3ᆑûu?~ns͂ bRAi#-s"Ȩ4QNi$HٻFrWtlR|30 n䲸,|}%䗙 SldvKLٲaffwSŇU<<Cvl`;8fKe3A%hF&J<-%fƣb.ZmZjj+]Z\ ͒K>9lF*D0L@9R.q,jzR@/քHdi/8\yX,G:|-lNK@Pj;wi$ʍL&p- JU3t3 wZmAwg4ͪ|ty~r䇹('q2=_f$!m8J_'_3WBhB@M[J ILØɷpJj|yEKjdA/'gW_)=xi)ubf.nvnuHOӻ4/URukifݜ[F ]sjGa8#QCP8j뗵5f̺f֒&b|c=dǫm}<#_1JHixcv6pm94+_~={uT#V\?F} 'c!+friݶr߃Wx(kcS>ZO !c%iܣ{ӱ˞8ϛqJFMiC[Cv}L?lCZc/ib1_+J_)P8q~WrM-]Yҫ.1VhGg;wss?Zo~:6M7=0{,B[esyV41jʆ- QPepr jm5Ò'Pe 餙ƫCIRL7v0hY\x (gēW:M%13A j66YR0-}QJy%hh`v0ZLre)PXHf#g]R !1X-N ㏂:<'E@DJCe3Id TV; Tx㩎F&-K" ,y*]J8ې[|H ^]\ΡRV˗!3p]ȉ;ׅ_X~>9q'o >`кгк `fY~ALūW^*O \(6 LPRi>Q H#`7604E ˏ ;d CҲӈJN)80=SYpdM R NJhC$#ڭ-`-)=՛E1v @p(bå@T VR`D yq:T>).l RGfkx=EtL Rt~.o+Ek=3XؿF?ctoìfnC>g;u¡\3^'_!0Rg._8I|3/4I~'}Z[΋V2I8Jq\]WYu>\?t~&)2FsDm#^{lvMW}3^ /I v6t_ǃ I{1=x~Ĥ-o<CͯfՏh:3y0['6h R fjz˓Af=SfV'GhvUR7B\(B• 5jѸm$"*NY=/h./zV =+Lr ZѨXI8Ip B:,)*doA^Y-vE}]nJQ̜ _8dz5y< }vi'"si7VYriű:s )~Jmh$NU%"K #@$p]HH!59gS @Ud>ʳ=Hnj֫]vFr[^l+z%CM Oqt[mw{/ƣ\\|\nZxg(o@G|c]2 *5n>nKɅd)tL HԂhQI͕AJzׄ2km>Y1yZӮ,6ܽ`Bn¸X&eNS#',Ҥ ARGd$K48l PFFCњKm`u4[mMr%NRC-N-EX+g:%"ded R, @(BJK8tBJ@1Y}S ,o\\C7W(%^hNA r\>NMG :&[$9!c0 XcԵ&ga0_͚MϱG4cn(DY{~.؝ջ:%vv7,s#* Z?anQ|{q=D_PCΫ29}¤Ao&.bW\בv0%}Rut=Oc)ӓyh1<3miX%rWR%rW"w%rW"w%rW"w w舡]Q%rW"w%rW"w%rW"wEZ{@ZJDJDJDJDJDJDJ䮞Y>36LfdfdV|f,5GfTS-Gtmad+3Sl=,zL xU"4es:/)SmT|zs}1v1}  &b43ll'䮓,9^?l'WAW|cmD`Ə&co,ԨޝQ Lł}Ϻڗ| (}I%n?K7#i"RkD EGZH2^_P?(zPr=܆꼺Jߖ/%FPfzo,5ۃۇjl8a@=&j]C5ɨq1oP.6h{p7ǛN&{%7d/U֞C%_ۛȭMF$B]sY}2gC6wb\nr';]޳$l;sn[G#ݼ{y#ܹJWsy'|so^y&Ś_ܺɶ22w<jL]sϴCc>6BsHږجu޶&a4 &2z%cqCcLH:{Ȅf_@. 
KeD_,AeH %[{M\+DFh,i枆e ڂ\'M<7FNB/^[ςG1_by`RO1x&Rtp{P)Gi~rGVDj9-gةe*0uKۋY>O]~/_fhB*:'rD̖{8k&>E"(2P3$Oo.4[>^m3a^ȄAX݂qpfVF MdM46-fdnOۛ|}GEy-mZLSLۃIct-V/NlG#e yֶ[H(2Jz )ZK 8gQHM '76H3^L*&V CEY]=m6=!`5JdbM%bKBH{6gtq?yCr} ODaI9E[6!j$b˂n0<2:/mN;niAb&EB1;LYehY'GV'!vTW9&,SK^9.]JS>iV4*>mk w q= ϴym/.= ܙ4\0֛}Aqb kx.Ҵ7ږ$}?NIzrSɵ ä?WZK6%﯏Caב'8;ӤalJLgRjc30d  Gkvv gn2IW~p{6:K̯-~ŒY9X.0K$u`ڤ$7҂çy;Opڶq:Ov{s]VWVVS{w>ױ*bsYJ)?~4ZM H-m~Ⓐ4tZΑc3n/5uS}vw7Wcb8 \_<_\Vػ]q n\AQ|L\lI6M7O#a41Q`h8],nU8+gg]+Y#vIq/qۑfB+'Π7y*8B|xw$2GWץb ..;]BBlEң5fEx^XΨ8g.BۇO߿ߗ?|'}^z/J-M$h|2 )kh{S[^kjo10ruͧ/yɚa޿D\'`,Mf*iWhfa(יwc̷=_W< {f88~=-œы/VxY5:jhot7ygdyw3D=7>Y&٠2$L V@9+dNd; SWJMʇZ0ai#m}fByG-%'ޗx8u\l-Y vЧ&1{BrH H1qG0"І qJZ¤yQZx4ZW Q5a=A+$*kF2 `Vj z'o5fbT)AEuxm]051_:zOIgR(b5l؎6^i 搃&H#@JOLcN LUA-u-/gA)ن?B_LIz)CƔ/-G53V#gq ~~Ɂ M?UB}{kWeVʻb"$W&M,MteYXgYkPMJG³V9p%3h01 zR0z]> p(]&%U[3V#gjX.Bduu!NU3;)|=I{ƜNt0ټ7\c"ǔdP$("Ą*iCf׆y($(3n4ˆ,i"g{.xQBʡƨ:Q̥MҪRfAшZ~4+&hjqֆZ:!حSah"Bd}C̨ H $+:IĂ≛8RD9Z,,QH^+jևQ_b ;d# ~ܖhȅP=JY)ynkK+;N 7*4@(٠rJ]qQw]Qqg$FJ,EǤ1%dѠ1zq+gs[fmSBL>$2x.hzt 2PV"е٣OL/].BZy]W.o?GsYMXsq,9/$Z0z]z&^(ATǫ/fzHL7L6j oˠ 0TV5gѸlbI3R; { ~@c }-,Z{YL6ieѣ{@Āu0sɽ 2e䲈D %-yrF$)ef+jypg]hǣ_oOEp?KsC0ikl|>جͫ 9G }dY ҊRe9$b7Qx:lU!-3XY&-Ђ# ̃2 xv\]9x慸NB\cS|70T_BEU6QozfiMv}!:B}㓹4>tR}ho纑bcRdējWӱ@w7 RZo." cd:IhK-B5%UԤ/)5|%H6#0Ƒ mFW{ܥm> egMt7ٶw4!$WՒ=eYMN3fVgubyӠy 8& tŠ3Te-)bVJ%c<J'-cZWἑ[O1W=)wDzK\%Jr'RRsUFڦWw4!=0̃-Ky&TQ ) M:,gc%J4UW·8;0&1ʫE{ɪ BN3Kt/,䅹5lܽ;O.asSO1q=rڜwh7Kʋ-U/FR4P,v7UU4"$3 ZQZ1;`Bv;Ytsrr;k;OKaPۍGstٽ)s];)*ieAc.0^FVՕs^3륯nNUluηmmh~¥ e5 7I=QtN]#3 C;y9Lr9ȶ_ϳllSG:>gw>%M<O>ZV+#8~z)qYCu搫Lʏkt>˫4Z;VXAF͟xND* `,a]E)*%M!o#rpv6Mmm7r*IA&n5/K#TcjTnJSpJ|`^]4ACC&"d})^U/XR_c@t??>^e,-49@[$g*nղToAHW9_:3M@}yݍ32v` E`oU 5 TUL;f*gaUM-s:f$0kV*lO)-Cu:>]Zbցf[(#0`}lZl0^FѡgvW{!3k.i;swWnG->}Eڝ|m+܊ qD16tgݵ~m돷βA<׺Xwk']c[;r=e<'퓎{+{x~_̛f"r5ݽ[ލmjů5-FoC_Njm?4xzlfɯ~)g/ngWO|& fKXO7 9{-/-?SkޮzqqMOϱujxt 0u9b矾f߬.48拓 }*9c$o_ouRPX(ױd0@TFˌ1bI0}{s5mf,˙osrF-̟Zy:&% gaAX2wD}ďSl|+WŪVUM xUeyڽ^K%n . 
QMlk,_$ c_Xࢯ_=VH)J W ز0kَ[-i63CS@\QPAT&Wv[mzK_>*RB@F3!F2AFbѐJsqF*m=)ONHc>vyN~iR=9&z:I,plzi<5cύw,\fr$إ3CTfrl39RigrN1fSZ'GݙJ"Z+RkB@e\}9;6=?eJJU/>䲧\3uKbz\S)"=p3mz6!\`HɽZ=H%d\WB3fXBB\dpEr!\Zc:j; P"\ڷ_+Tidp pJǺ"]ZL"ʸ: -P HTpEjOR<#R v:\ܮ'u"" q5B\c tpEr>hZOIWcĕYj/pw.͹R( &zIYS%K5x:hɑ&!eNȄk9c&dI~Ur1 s)ʠ`4dpjEl+cӋSf A/m]+Vte?r)zJd\:dBB\2"F+TWJ8•pt+ˇ>ZqE*c'qu\I ;t1 HW q5B\s̨p9dpErNO U 'cĕfNW(تt+uLŎ+R)2ƈ+ôR&!\`-Y2"J+RK4׻ 82 EsO0y^PQgH$EGkVLu]}:;z59?K2;zeL2YJ>$X3#ڦ2CVG4E*(gr S l\ÒY''2z/R ܱ)'x~r։"hW6=VGp S?V%+kLWVqE*;(WJHc 庡v`?g\WR0)MQLITpEjWejBX1 '+m*" bT.jR) *+kHPz4"[Wcĕ.ӱHZ;P%2 j20 H c2jvbb SӤUԁcףNJK"w:ik=T GĘOx-kT6$7cUNJmL4%QeU%gH|卲7*j+q)%` &kd֒H5[;1Z;~BB~LH-DX@*u^rp;6=2N O^U?C%Qn&pO,z 2mz*!\cD W$pEj:!NHȸ: v XW( 9H0Tʸ!d/مVjy\,[⇭'gzg.SAݥhHѼVxUe#eQ (jQ-iXS+*+{4Vuϯ'_wz$m<ޏ DOv6vןVh/㪙񓿔7%~K+4Yp*\yXb{}ޑxp7BFWŞ Gi>KCdB·'{<+)$twi;}݆鼺oXzZBay5@]y?Nכ^8[r_~uS}ʷ?uUc-v%t^X_+&BRUtݿ?W%?wH W@sy7s/|?՟o-pۇwx2ck[OV^'?Po7ItG4;<{0Oiq6~~=m!|3n:چXVv]\3D$K'}ŝ`-XVU\@?{W8pKh.8$~K W[gY҈=TDyDI[<=ΜlUOUW?-הY;! Z9yN4k5mP8YȮP~syk0K=x(#BXY`0` )c"қ´$HߥpbZQy5i3_/@)r9VWRٶ\?a2;Wzq:~ְπVTHrv7.pYz Viڜޓ'oF=4| m_NX+"j0428ps8gfZ(S#.#3H"-1T8P ЁRZyHT&:1,g/3E4!)`Hn؈b΂wDðq$ggPȵhPw`:Z"a60y \@ Cn=7NO?`MPW+W[?h㞦-u:\5zs+e;}Vu,17gG4 qL|y׃&qfh]b 75a1x\ oXd1F۲Pa|Bc W5 Ur |5\9$.%sIJS͕j&̇]Zߚ`2H- a]($utwt'rM%',/'t,ݹ\LSף2b*,a*SxLYma3k%R) pd<`/5Q J؄ A2 ǨU1kQRCO ԗGgl[yӋ5E'Z`d \`ӻrG=u#R}lddJ*D LB,p' A0[ʼj *ƨrRgG/6**b)7{r QMH7K2Sk)1VkJ##SPPJ$XNa,sGU{铢#@9с:@:@{ # h0Ig֘4Faÿ,ɌEP7*ٱBa~?eS%i-cS;K{(ơy=4}z+ sEEP(HgZGq=1Cߜ;tnK" UӍM{WOaRM8:\~u)Ll:L/;Z3,x?qk9@T֤T1F: l A2sy1b Xe;ATB>r۔JsZEn0Ca0&Oa2X@4/>BW0ȕ{_]6;L&"u {g[}H.aƍXe-_e/9)-9YôB@Y:@ԖiBRM=:bViƉl,y+t:d7JN.| ].HJvZuU5L`+H£B:-6 $V .[ \\e[&I邒$>#ĺ}-v_?˿s~/1&(j)ۍ1+)#-VH jx+Fg[w޸@G8GHܨqzpJ%(-Fvj s6$4V+ʝEdQ(E1+Ƭ=ӱ >~SM*-vwl4>ڶwv(+8ұRIȧ<߼X]7cxfX{ E>F6bHۀ(s0V&xl|o٬"K/+c݄6{('ԫ@Qnd  BLx)O4 PSw߹+c_ƏAhk5ຌg6sRfƉob8>aR a[Lթp,Յ6i׸`v%)5u !C:O-K?>I?<'T Χ^Z /]nm%~F#˕x%F_iIP rTi.n-ˆ.a+anď!&GĚQSy4*"̸{^==AӠ>cwJեI\)եmv]fo>N#٥:cչ"`ݪCFGDcaWTյ5:ڮ,]Jq'{]e5ձ1hwuIw%C7$I;vlq""5mEv~`EhB6 !Qڵ9&>7ۜM 0S6 O, ;,7 w 
Pjzhtma|lY*.R:6; V]Zw-Zs"Gτ U9u\FFሀ<(kRp &P F1(x*D#Pzo3B+]^f +7‭iU"bK(<^[ :%I8!n6q)p%8<-Z}6˻cz`^HR ,?؛t ugu :m3Ui ? 27|+˜REAy`,(U0xa2+$ŤGoal%+f{Wj{tt&t5?I\J>IZN.$e G&;~&y&,/@DlRGTPJmXi#>>*ꮏ&zr\~/n_D}x<`\ɶ *Ύ+i{a⤸qZOT; 1- 4ب){u"yu)]q,K Y,4+cI#EE&Ht^;D&MO| dE!!YKM/"Fhs<,nCvOw Z80SE$(Y` +6bo4y Jp6C\"grPHC K)FD}>[E##$іX˽ E I|H#Ei$IOgw(y =DM7|/w:nn|޸$ww䳠2@8+9vZ&xA%ULx 9"7<۠$ѼMUر!~hgrn,m}|s$ɿr0bg`,) uQXq<VL2v +c^̾#e9h탑J2""cJJ$"iS;=!! Il#C0!B*Cajb &Լ豉hj4BZ"2ld`pgܖSL |#ʢ@ ' FBh|2[).r tW7Mgϟ{@Oky:*˪$˹[NP$x~zT\Tׇ;q@(E4[@20[q%o9fd5_Z7I=de+;-çA3,uz ޘTY >갫Zm=T{k>Pz np­Y@T5F 5o v#h)E%ğHA(bp^P\%8P}{=,6K 9zL,/oa~:I٭86[?Ϧ5n͞6\ZwDgpoMJ\16{ܦ+u`-ҽ/)Z}M짆LGC)bA6NuFbø;GJcv~qreϭ!-Uwt~0Ϧ=7+/7couE²G5lgמvC'&\QiP{3mgʴ1TL5 i:ݩϞhq  T6+m}C:t0VjS^qϼ"*MPJ*CqXYmR-Xiґ:ڼusL(QmGs(5tdXDc$O$JH!Wc5m̺irC4S]p}ŷ]rSz쵲2aDOhGM,+&9N@9x'|tpioزɊ}sFNLKa(>OVLRG䒋P|.4p5bw8_1ɐ(JCwڽeu\l'[qLStq,J"$EtlN%Zx/utat y-OR~_;Hj-ڳۺm8)Wlr -f)lc-,A<ƘH zǁI1JAֆeጎ' .jMȳX1ǚ=ds<7r/(Ϗq;o'tpF .cH75aTm;{ 5G '#*7-Hui02γ]vL=i88 Y>OL0PGphCdRh2<>*#--; %ᬚ|ĐJ Ed0'`W֓|! 2L->4.*i#Ѿ_OB.y@obpf@X%Yo]meKj V(m.r&$*}4fcڜޢ'?|{>xU61 Њy8Orm:e>jqGekGRzH֏tiy,? 
2>aTz2],zr9x8krnUݣ.'iԍs%jII'g}DZ d(\r*33V3X~:87Vi1 ]%̻ iPWNn"v(_Z7ޘt9\cV2 '#kzQYޚ_ Nh(zCNU(`W$]軪H$&wCUuX/Io?Sz 7~Bɻ"6RʁZ6lLXtҼġ?k"9+hͻ,Yw_egU7ٞwמKKŲ]F?zcY(K2->͆{ێT:T̝K:HL5UpIvtg.:1!csA=<׾a4R6 cѠR ( (.od {㩰Nu!o!^ ~thU&hgYl1zJV& xlz!;b,k.wd*kLb/dc.6VXM- ۉ,jXr&]23bf&TS` jkUVH>Nτ\J"$g* .,sg3+^Ro4?އNɎ]z]_w~Q.n|`1*S*˾raUV&xE˕@LֳknJhwsjT=%sQP.٢LR}#col֛;b!AXHp'EmnfYO]'4M6/ ]N~ 'ӯUNYQ8hrֲS.d ;q";TNլmph24g/d:FUMm9v!<We|1T*$v}G#"~Ď'|;vEm T1ɡa\Je`1#s:`{JCЈ,!.,X $rɊXZBX!S]D^{ȹgN}Kk;Ҁqo "xǷXN%$LAv1%5`oKQ9nJD.!X 2ŝdԙ gbəbEť$2IGψ9#lu\t ڧ޸dW\4=>x[&Yym*BTQ+#Z'Vnw#a[#lN٫AW4de 1$!dݦHJYMe6]+{nbc!abȦjr6\*dahGYz[ *TɔdȂԚv@9crku{m|,&&^(娴JlTd$Œ­'P"ZZ=/GB$,X(,6fdI՚㬲I&AqXlk F]7S_yvM ە^ؠ:@M* w cU0 ͔!s1h*!&_92gK dR{8[Bb~ :ovp[B*ק ]HiiMUM.>.F-ZS'p(qs# ؤ*!CXC 2 AUfm; U&[u}R]xlm^s@ MހwK |Y]!jUW@ @֘ R{X|,#mjENhv)0ְc[ 8 9@g@!t9hs^u{!ƈZNmJ'0(.63jAqX[(&Є+> ;#RF WnT\pYT2N$Jhƚva0Q{Y ̂5*)8BY xAmJUΪrݶ2+{уEXY oT[ 3]$yxt,2 cmcHuKm0Zi+ FŤ+/t6b2M5d\Y"7tXtqtU  G4衯po6:7  ;ށf1h.֐Dk sQS9)  `LyzWã=7fܵ5p7LJxKdM:39P.3ds5HNJdt A(P`AHP#F' \ ;`752[46I#*dO "$U."0"pŒc't\N&!K#:KckT];g&^ ߧu=gx^E:<^.;nqQde;ĪհNnq-+Kg[]n?h}vQOePUQ1 (y4J XăWEJ/P H!%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ RUq ݣQ\Ǣj* E*bc@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJW cRsS(`F k|J X@_V@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H *a^<&%1@G d6(hJ c@gH DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@r@wֻ vԴ?^_vׯNI]OCRcc.y"xK0WG#\߃.y&%җ \zc腗=75-{4᪙MRZPÕT̻쿘=o.&2-eoiN}zvgr;hv3mhpvZ%S S,4$t v5 ;OhPN<Ɗm| Cx$Kߖd;5/Y_ӂ O2h5BDy:n[崇hַm' M3_KǣCٻo:>lixZ7;7?贋y+:_գ,WI.ٶ"jsRLwVIRW#Nn{<.}﯏^OH<;\p8xO#:]pkپ{o)w"V>tgggyqʚׄY)q+L YΖC4:7v?ݹ1;洱o㲪)m+dciWϾWpG[߰g62sOS$սٻ.}>o0Mn5|pa}nӋ;Nנ7__jPU Ǟ?EI`v}ѵM!/ztR )G';\6uyGȋ &F/IgН'3/A'0t'] $mW={@F@WI)c7HvA9 ,!AFw)WsD/ VY,OX+wxz^GPݖne#|v?P4@J}dئlTKvB3닋yYCV}夯LOp]˘r[*_(k rl~^Mw5</l^n|&WpP۾+1o`Ty[-vhoɖ֧BZFtf5x60oB7+M/pZq) -[]= o#C_[ +.&\agl(|c?}SZ~*[RH*RC_ Sȍ#3$\id,k)d'a-1T#n)f[ =ǐ?>fef~Mp}uf>v.^ıxWo 2F TZdjUٴ*kvc9z3x #̹Yؖ#Fz1(~;~QߝQ<`ZThj]wU4]+-gxױ8Mz ew%Id޶ɚeV*x^dY9]+ ,)ϸ{geʹ8Φ] O 
[0=Y/A3ݭ>v8>SޭA9iw3[ᕪ '\J]"t3a70Lڣ2 E:gX[c@4*W)@ cyI\QWxԈls*b|vS>#ӛ _ts#"Mpޮ|߽݋_~%xZ%r‹$Xխ-+IJEZ|JLNмkʲT1ggjMO0%P K>s :ҹDX#/"߼`9gEuHމK+u˝f`C8%JPA-;s ֙vJTV~ԪZ*ĘR'SdURhnh&2?N03GxL8ط5]SN~NILg'_bb$v$yџI^NPL\3%v#x՟Wn^_>\~M:i :Sg.;x@vhC5VEfb;lg8[g'nY58ْ{|j ˃ཊg>|nJ;p<"`dFռjd`:q7uaՏv㐳:[qÍ߲u+]̏>8z YGzgv껣d'u|$\ 8|7 釳p 䮮Ȉ:tۀP:}){$^ѕ=Lߗ;ҵMcpIY Sts3Q؇i}>Y|=ƨ  $^Ys**HR Q{ CdAMs >^lBt$x#c͹J*; cjp`M$ㄍICd J3:O& xhJ7BR!ue"H{,Dd .ahMHBR)2Vxr(ךDAj+Hj$ϯRmP&P ,Y%2T"2pFps &GhtWߓ" 5KjQ>(<Ġut: @'>F~JIF-Pu^o0;U=FyeayA <-̯|MrkVԳ1/%3Pc} zy,^~p ڤN/lF섕K:hÜU2$!(bpHדrEn> +䤒/u;? (&t>8dK?]zlnj Nwx7%;EvhSm> z1ѥgZ=^ce~mӓo./QXs~O|o$jOODٛ0YmM^_kW2pJQ1r2 bQ9ԣrgOnz8c׮ͫ.rݪkRUg5HK GTtjjOy/՟8j߿odzT9:uܫd hv]뛏_ 2*?ImԻ0Ob6%,|'D,}r1Bf:"%jr20IpX*?v1E켝ٴe) Pң-5|))cr5DkĎΐ!a^gF'dߵ~_khb Αu!E-1jÃbERymᕇ5Q* R0K½57*扞s;'L$S'.YY)RC1O$/Az}s(x\r:+|>ZΘ-B5]'* aX̅NE|6ڪ#/Sd"D2vFƂ"vзt|7K*L*A{酷>{KI.!1P ZX]9&EY']ۗd$lXK2IF5wkݜK-Z?|v BB60&%*ruHDBHB Tn,W;Z2(Jnu9hei9eA0!<; GgNk* x}0h>R8ՕޠϥJYWek& ضB I {zk}OJ$P!Y;@YZ-d @z(Q#1d'eWa&atw3;4E{5?J|iK} XVoy`s&̅[-TB t H7@N>zvF[f56ldž:6:%K1.K&AIHTPk_ha?iAAY)$Bf%L:r L9 *}(a)+vlXk| ؍G%4hn׏F|R.nS|{b1*iNUyskL++5DN@:V,kR 6U%L.Ue>`L(Zkfܡ$Vi qƆD[]bЅ[Յ̞Vdw=Iw9;͟ЅOUNY5;4θS&ݨPD `K2dm@6ކ셬QiUզ.zJ(;+L@_d-H[g#ZsƎG|݌;6ڲ֖썧eP%RehUJ& XRJu`1KbѩT. 
{Ȍ a/:irk2d)L#ծ4"fAjbyms>Z6gqgNM5jՠx\xSդb= Iaao)y%;sn Tdu[ޣFKv'U9&4dJ SbOZ&Z]iFlUG֋n!\l%EX/A/z&E$#끀碋*E(RShd.GN =m]ч͸cS}!lVm{y;/CPHՏ(ͫ{S{NwBgkP -Ա5]:]Ɂ,DCv C=@Zcq1#@ 2ꜣTH&3%mKP>K.)ɘ%$$EFyy  R ЭȹÜ^5'VPÚc10!~x:;^W|^0ÀW z-W>M NjU YkLϟT$DmgT](;J_(" @6hU-크E-SbR36Qz ATYq @Pbal;Tuf 1:*ZbLQ gmĔ2"|c ی;ܙƐ;hOMp?̣=!qi֤n>Mkɤslp霢bNx'׵(:Z2fJu$ |S!^?g+,ч XpQ(PNZ`dH24#v!^E#]и0W鸀)x]=nqT'-fOZtk]>Ko|]Pwr@'fC:e؁47@*y=#_xż~aVhz~/ƭǟ)݇\-zoE]j::Q>NX.z SVWՒ/*gDʅ3%||_K/kH׎|: ]zW\|WZ#6nXhj_#Z\Oj/X<Ϳsr:/0z?}V{#jU st"O:tfճ!EgB R k+QH;wjOIF4o'v'$0dMCY1 Ia@N ibtZeѰ ͎0ݸ6ŹϵE4b`9H^%zG.ՆP-d "Tx"#:!$_k#)9c˱`^OZq^X  {`%dͬRٴd\L!Z9UAX!,=wO.4 ȨIñ 18LUb6qxX#0>GWx:l?Ў>CQ!<3)^֑Äxf{bI;a:ٹI1vELUPwf}don\?MtgJ۸_rUf(܇M*|pT*c䒔e9xHFSDr4 L;gK^ }_JѸ>No?@'A.p7M}97~zh,"-Qf M%qaԙQ܀c/{-Nմ)C)eDݑcn-VYJw*T(*[A1G/%u SR!Cs`HinE΅rl;\7v>[ʽkm }\n;i'Fzb00QS;.)D՜x ZSL`e菉5HBXU]UhCqH]DK)j{g׵w߃Ȗ6@< pUfp#9F!l02u8Єa>¬DJϴ‘a|Rڠ)C^cL`%U`ݘ"^r|!46+ڠ}̚fN {Ѹ`tlU0Kvm\0Hʰ.Bg NY6 3qv4 ɲhz+G9,r5hpv'g3&ػfr:[JN^;s㕻` *t0G<1)3xk~h?E(CyT2R5TU'K'Hg'oJWp++ЮɊC0Ud1U}*WBX{H8hĜ.ﲰkn7UY&MJڔ]c&dO)[2k5EM_@~W)+"9TV\\t}yٻAWr&͵t*/t6|9'~b4pxn{~? _o.cjʳ=)(nelj^Hkmu>>Dsћd.;74)MJ>>oq EJ\'*-p鷬x eg6;o^~^߳͞=j~rꟳ˿<<{X)0_a\1@;އnׯ>wٓww.d?7Ÿ&|8z0BEL;J6^ ^XywucX߀? s.}*iutn .N}H@[3:<5E3_|07pͰػ*>u1e5-^hp1T\㛗E8}$nd'@ġڳE7|.ts1Ut+2k@ܟkyzsZ[ݤSݫ 6.ɯxtVm|~2~d- 3#Pzoo:zgP?2ӚJ)]~[|dDuLJw$Vej[՘u\*3?Sݞj޼ s"CKu_|Cȿ?֑s##b52-P@N9/$Ƙ*;oxÌUX%ξp_ WRi|TԄQ' h_ #b3QItag%t4Iq`Z5#}#}M3WEpjߒSeaitH$[? 
mMDے?%QO*)`)s;X&3ebX<\`029G4ۃ`R0cSh%|S2c>{z~k,kŤ+: vƦtuEPY|\ Le6/zS7B{xLf>IT)ЕQ"'\2?dڥT3UV"7u&(=τisKrMy61X!M6 @0#ۣ7$rӒސ7$*.ڷ7hjEpK.Rn \ MD\\m;\^t-?9\ݏ`E.CO~>Wn\{պS"o\TDBW@-Ct*QIvp-I6zJj-pb7+J`kEo \%r%J׮(a\}pˤդHSfC3'R3LNngfd=ځ]X"!L̀esϟ=4=>A5cgVp<[LSY.Lr027q~qJ[bgUmj~UhVaxN *M<V^뚓ϼ\)^XE<'qr}{ܒ1O O>E6,| % Ԛ4gBkkz<Րxm- R2`<4@Qmb,Z %Hr,whA hAu1&J.%FL._߮o[Dl>ݺZl,:*W] JAQڙ( D$!L$/HS.+ie%sCs:ȵe>RTxGDJNRdNF03,*͸J UAP$@2qSFw_y, 4CO/+% {&B80b:-,b.-Š=5XZ YPK,GHI?mvU iQJibSGAEDqYP@ZobTTS ,;AjW azD0F8f 6n#\4M"RRB˅#lYK8_qz^vˮE^t{)qvO$4L"^ F$.V^p'J0OgY(G (pf&SF{W@gmvϻ:DcjɦV+뜢qr9ϏQq6}1Y JoYW 5릈STKqŧ};v)Uנ.DO ݴ%&׏deK%UUb_\wxBoN^t8۟N^`N^LD&27ϫ7I[MC{uTUm刵vޮr&@XOh)(1_'fQ><>#@p0jᑶh91&{BaR:"As:njp k6QE7Ze.hyOϼvnkpـ^s䠢;9K" ~د${']>l7Q9p&E OW6Xɒw} Y[v$+tvok"גVPtjfs#6ߐ3 U.ϭq8Nm6Gt-sjƭwnx/|xs3=/\hMONyG@p)nxV0yw17K_ς5?oյ϶MPecvyC'Xtw1[.[CDwp")G2ޛDb\KK"1VѽO$K$I`;N$e7hοニ',BVV9+Gʨy茵yQ}Hz"R~^1DžKhZRXD%7XKkev`(eG_l}FOn-',*lҭ/Q:+Z`Wϯv Q"u_`5P4$D4^֦Q NќԔ,eQEE5&i*IblQcĩ<1>:quqQEb8^% $zGgG)1A΀4Q ^'&NF.pq_w\ea<@X5bAUqXkQh&mU=ěQ(]41Y<ӕ*C ˁGm};MOK(]hBXo1QL9ëu D(9{͕\+P ԥ)g{# kǍq(RĤ$#J," y, 1Gre7g$(σ#FHu4ٔ7j@ |2zsU:^XBs漵8JJ$K XSf)H .esDR6Cϭst;r+)Y)C~8*'%)B d<(Q7** TVKQc2 RJ jXW.RB^_Q|TrrJ8E]vUP`,P&Z0)aD@T W.}J8S/[nc&4co0ʀ/7vy]zgAnp=vn~ G_+ԋL峫OˠBM[0ԒRNԆ4TK%smxm)qDeh%@\rQomNuA=vK]Yϩgc6*"䤢:LT hPRH@F"l'cr\Y)5sPY6zriՕMKOn/fZ8ΈD0G 0q &s:vWdꤑBE AYJ;PWD{k I2Ė=vKH\c"@] @Zs&Q89S1;O8KNPJp/F=.G|fEs?רɇEu0;nV3\^xz(8JB̓\L-edD@r[V23L?B=+CU(xƠwhDM'mBHD|:)4M3ߟ*:!}n|zſ۠8)$OGG!̯˞}һxt[~xZ.`o-^|OO^>Kճ_SI{}oMOAo?M\V$7K_<%:wW.)yʨ~*Z4iy/-_;7nej|_7]p4zIAOljmxҜ;|4XӷoDnI7@a<#-%tlZ~xs_|=kŮ?/fxF9 .yqkȷ'Va۟p‰8RaѴ wޔdas}E͋љWEr~M+&r4?GMi9hpB|n6 &{Au]B;.<3ÊP43Zo }m/3xTu6ޘHS7[!IԲzc=Ni}PjR\'ut73<ۓٍx{[ yk72`|I'mzٓ_\8PFvN/#ãWyxoF.׷8~fͨlw#یqX2ep, ,e9k3,?}rh~/^hⅬ(ٻ6n%WX)5dUVjI$}HR.\%F)ǥᐢd8F4k/ Q'W!!MCKG:v6y}gy1_sWyUpFv~~U;j~7(L |7?|_]iQ/#ڛu/z8{F-qyܫrT)x2"~ 2RJu;fOm8:Gfb{1V ,2׊*yZCn6j q A6"gA\6p}SJj[!6Y(F_lKE6y"K^,W&pmEϬi˜+1$ (vtBH'Bm?'N3Pb2Yjj5VfpPh!E.9l?TDu>5 5D]M-#WKenteY@MQA2!RR >G $- QY 
E2ݲ3%hQ"SǢ9oZ(Lsuq@ZG!x8l6ٱ8fH-ox`.s\6XE݆GkLƹSsq8͍q;,ricǎvÎ4eTVb7i)Bx"Uȑb&nv9.Q^ցk5߮\W|s]W բ[l;#ŤBSQK7׏GNGCrho6 [&4I+r g+80HvtkK%LKҙ6۵+F4Fd4);N%0$<(j-|fէ\/H.ku}Ň„ Xe8)IPXֲQIIJfm,z]LesV%Mc. Lq6{4 ֜#*Ĺ[oY_Gi^~unoY¦ 5vEgYȲ-E?& v_ߋR *g*h cW?zIC=I~{4Z0;0.{ARw?_y7MgEq>J=@CA>j 7oEϧAPZ )U9옋hyc&UJuRIA~\ vbNF4oS92;Df M x)s`QxҾEt򨠓t"DyŤ5@<%Zs9!]. ]WƯ+וue2~]_WƯ+ו=6ɝ~`خȄDӫd *5eN'sM$,1{vZML8 MցPF иB)!4ZSYl̢QvD80H4s)1iR㇓w8TLNsڲ-avژ841hڢȂ6Qe(◟%Z;>Mb|_+_N-}Mp2`r2Kq*BP DZC% M v$)iP^G)K1GmP7,M4&lgVyT=LʖJ,mRl;o-SDC%gaǩ%Zs G5D,Pinu3*W͆&BHF-KxpY8IR2ѿF_G'UqэU+ʪv|rz$v6R9/ U"Ws\WÕKt7ljnL09 ˎ}wF5Z'MjH RJ֥yΨr# ޯL-bIݲoIUTWzpXJx6 hHWݾngu+gd`ٯJ; p7e[TaU忍 P,bDR'D&. AT Mmձoˉmc`uým\[r*٫o|H  >왓4QQb!jBL4BR!!M FD5)h6sǠ׻XOpnGgE,%K?NɇUu0:*ivSϒVgԗsŢcL)d(Jg q#g X`T9%,͠UDZ!׆Q!CQ <\HŌKi}cZ(}ש /h82tP"EM[v}xa!/xLF$MrG&FJK>9̂͌% MEbN\zMQZ2)+a|K=Cfġwg~6)aO;NtE40??ޫW1$n0)Dj[G6O~Nw2W88yũ_P3/k60g53*'+ޛYrY='n:Wg3W1ew>~ީ;mLv!S 7WH'{}W ᧜I㗋zVȽwow{yK>ϳ~_~yuh:!87_})& N+y~݉Lr0<Mٯ{۝ᕼ[nϦå/Xg ޞM?{dLj?s#*!}e~/lPXcӝ(ʿp|Nm9?WWB#=L+:m4Kr}oG̏} j(/n|4k.//ǝUQYf}_׏xt~th.s=߽.X|U*iVI)P58xM9g84SCMu/|Eq!Z=\vKo^\+WUɮ!UVM[N9qzN(瞤^';wW_ ]m~~'7qv_MߍFgQsnr)uo^bM>?T_C'[4͇ 5=޲VF|^xg^f{-qcnY$d+>P}{1xx?孟XflI0pּhPzăNKCKE&VcզMWQ'^5b6,rWdBFu[M 4!Ҍ1aBje^Nrjhwg5i &@:~r%UcQFD- ;y4y{rhˁY(Bv$LioRJ4:'!0@IM1QC`CLAqJlAp Ԓ3˜;mЕa5r*TW{$2}3PHצ*k2=ltJ+)=J+Z'c0dڞ'Ct7qֺ/g4.7Nȝߟͩ-Jop޺˧>8GDZFK\6ĩThVB ަIT: 0bYQ ̥}>6sڗ=| (}/8;;u"u!-$..hrX@(7d9)p sLÂ{] ;"=lJ 0Σ,>~@ު[3^W5~Z̭UZZ^cݹu]ҭEOŬ^4 2+bmA,m\n+ޭ|PRy~}䛛kw_>֝D֬B ߁I!og;x/CoW֬",[Xs9ۃ7/Ӎw] 56_g>09:AM6{8OdJU?Hn2Ej߾TRBR#=Ҹr6kmO9ִVQոhRp:۔=Zk5yF49ĭKw9M\H >J1 JI YeBOŧ]_ՍҞP eN 椕|0LI&P2j2P$o.m5[]mo¼ Q|_d6]S Q/dy*p Cz;no^s8ͽhYokzg{IZcF5CiʼnJaDk}̓Pc YU7!J TN`]T[o:gh{ yN%U]mkmzDmjTn?- y9B+w RY=Ćri}r׼i\?r2']t!c2y#cEtY0.]`>m4-^8Ը_z,%uhƀ$J%3Tn\yS5$% 0Jo?b: +-CV?8ƮB[/w:za#?<),,fLl,R`VLIz|tB@ Ev5=?ִ 1P oM)L-!E fNsIIO }҄Ơ@0ZpBiք4.>:2~uث_ӆ&skOCi}p4?>Q\ ˆqqmOg&ښHHdy >~)7a]fvayd./V'48NmĬ}f`:ᖌ┡c䂝|6ڸ'~60:I 5kV38+GaH2Df,&Z+Za1p|'L?CuU$U=޽ikUzw\>J&EzjS3BpJ9-tں]l!1)Ĝ߆<~>V !V|9o=zqG8jAʑ\;#FFf$ ǓVGIJEd'|0<+*oa8c8OY,e.O/JG\\7k 9i25Lϻl 
.PvTia8)A^>/?~ącJO&oO">e  }h'j M-V29ło95FU!0~@ZΨsFosM!2@+VHL Rs=uf8x"yO5= ▧ApOc6ggSNw>`q6xޤK XՇL +Ma62 ut5]ABP6zpEx WQ&@pM B$eI Uއ֙kg0RL֖$Rq%XGx2uŸ}ޕq9;hͻ.Y7wpe4omٲ5d͝xlw{BPOkaoNԚO|^)n;8dL2O<6,҆ 4g1>nZ p_m[GzwrRNdhtA30ܢYkHD3ҳuLd(sfAG9 8˄e#-g`ViBLZ'b2ƫ ]{bqǜvi+5KejBL*$]}rBc%2zQGP$qA$yV w.YR<%ͥlkɞ~R?tD> t>bš}vič*>'{ՑFm݈FRҸ]юcIͽ+D'Qs:1EަB/cG$cSzktλm]vrZn}"I=Gnr=G6/qTZu\=#z۾^c TY"żOTj4 'ɀW醠lg >,P<,(G0-T^Z͍!C7ZDŌURhOuuo}LPQ.ULB篧dS0X6[ $Wfk92dSˌ& DJv![I?/?%"*!IX2B.o9?O[OǐYOTnn]׻]_[U+ok])2bx4Y:o;\+At259 VR)K.H`тI<&ZzALJfFհJ5]X3ԅB`.<.m~Yj;vA]|4[ &F;"$ JF(cBh,QX4&r`6'uO#̊{> &CR)ԯHFd2ŬB9TN爙EckF"gqeb֮jm^Yk^kvc=EXZh9,c*6`$!qcB1{]Uf`+ rEVthH$dQ 6jX2OE2䵲>Fv}% w:LX[jDQY#^#qǗTI%.L9ELd́#k;#B1Ie2JgB *Ԩhc$KZ(#*D5b5rkogH:^5 :&_g5.W/zQz׋8^g#e&L 1 @sdܤ!g0({xx,w>Tu@gͮfs-X|ƮѕҨij ']ugv7Gt?>t 'Su /nO:).$֞U~~,,r|-cxP=QªvuQ޾&=Rg8u]1,Kt5]Y:L>]13Jhm+u%{)HW)] .\tJ(LUs 2| .fӺZJ('ԕf+vJp+wbt5"]‰-ӲOsEʟ㕇s߀tVls2z(oUde_?.a@}e lwOiNvy+p^&̭jNџݳr nakӨ O7Gptߘ BU$3Fn T:4* v<7Adg %Nݜg+TՙB@8ȁtEp '@Z8 @JLuNڧ5r3lt%'>`U`ueҧ_; ؀FWki] Ou%4ƨ+n[3ҕzk+וP=j)銁MFAiO0JN+|Fb`1]+%Щ 4ƨ+-~.֕ltŴVu%Nwsz2zEWLkjg IW#UpΘϓ23mѝ+wNn߱dH 8&L!#R-Vb^י=- 7/fwr+?_S^b.nW7qv-Saߵ,޼siSdIIS.BѢޗ7P_UYx]rעm5li|oO2>Z/fMВ- ˋv)XIH}msDɕdSVB87MT93uwg*Y}̨m ~7;}ͭzͩ_j1^;3 Rjyt-7ڶ]$[g٢7=DoWoF{LJ{͎,D|HiuuUE<>gWuև-R=k9r2Gaowxӳ:oꫛ,c%E~U+fJ 8-lw_z4sP;'mCM9)_f&px͖}ٶui {(NR>M\~u)?;ƐUGEH&r3-rww?E(Vݽ5Y0-pxy|xu:)m?)A1`0m3:]r*]`gtyZ੬RFlf*ӺS32O  ?<%Ѽr8 NZUuϖ)Ő=[t`|7`۞iP)QtkKϞ'q)fS`}>sOX]:m3-F;5vԅn3tXe<+=\ywH|ݒP N߶=. 
LlY¹kӺ?oRU\uһxͶ}s>뱢ѳYq,cԬWqxtAm4ͼ8j88[A9 =8I12{75 PJkx hZokF8R?5CKjF5cLfrSIz |*=Wd4_pe8Th]+ӄ2L۔+l mH~) LcWUyʒ8׆_bs3`'s&0g6,aN=Q^izoe;lMuYã=E;y~k΄)dFle 4J*]CiF:;$5̙Z|GN(4L>Ǝj:#]10|f |u%&]=]AϢ3 >aOatI(;$++tokDBHW:yjiRJ(t5B]$BFb`U6\RiAcJZWc82ҕFWEWBkוP0jW*Π;FWbZ6u] Ժ}vthZWr"6JhmcWB4jr^ Jp;Sוкu%~u 3ҕf+%͓AQ*Rɮs3έ>tj; H7t&'s\5\iҦ{8´$drDf`2>vly O#a挱@AcHW Jpɴ~i3ҕYL|N `N=r9aD2(SےЕtoWJf+ƅrbcUF+Cg+֐OJp+ :u] %IW#Fg+>p s|J(Md_$ʩuѕEWSқIW#ԕS& Jp1] mARƨ+u!vE6$dJQJلk3j_,:`kts8sLuİtn'~(ːty):\6afk tP6Gw *SftXTOPeԓc''Pi{rFW+5< g'0p]\WhD Qbb]ᤫ}^kB3tRi)̤kX7e+GWLk5+IWZW bFWL0y]y']AW?/z0JpMrl}هKI1hDvM\o7޼Ǜ7o6A!_ OKNګm@/k{DVޝP'|U]캆kCO\qS"ʛwh뛷e߮K9gljU5^Y&_w| Aapp_|}U޾}uq_W~TśrlY|ϡxWR׵y0n_щWU2]rZj#N7_QY;"bVqӅ39n B G{5w_|w }0FE˂-_rӊ`l_eS-x 5vSnEQ-:]sۂ}Sz5ܶsPnq4A+VwmY:v`APӤG8@KI'J+DY,lI֩[[]ں*u%~Yɠ-],p 5 ]S`17l@ ]k05Za7j\s ^-FU5 LhYlX똳 PVPB ٻVLU:IcRL}A1CO-` jU!Rcf!'`WP&4$:\id<$X( ꛐI:Ci*dL'Bo ,l*!L:@@}XE[PB]Q[=ಬR uWh%Wk@ e :om0!$( ""*f"n43%̴(Qeԭ9+O[=IB# ݄#Bl coSL3H ̚` Ukq+y 6"38J!\+B4ަ2ݙPHq Ls$eIj g (Ez@?"}@PSQzzKUW.#{/$uYhAUDI)be+%2Lcr[jkIk ˤDPZl,@E@H v/P=r+ZQ0|p.#hҴAO \5m/fĥ"Ҋh"棊1͋BIJz6}fC|mgC_be!}8M-Uv`c{Dzf G1*4БV%SdIWcHV*U2P (yCs ~(AΈ ʃVs I"9d^U0P>xmBV5 e8 xay{'*APP#HA@8oU?jPJ4%wV4%μ ȓd}E1`Ye>ZbM=A!%DB>h 8h"!es|+j З9jz#H HjLEQ{XRYf}Jh v% Aڱ", "+P(vm!ڃjF,F,-;="hy0%-@Gh*k? 
ن,n% EBiv%&HTyP*Z(o*"㬪ZU@_֒&ZJmm`=A;+[|hi>lӮΗ\GItE`0u#IFOB=V`&}˿{B("bjЭ(z֚Bj, yHYchPAI{(g_OoyPǤbpPaRDlэsE6G=Tʍڪ/-h:)Jj]d*HE ,3 R4Č,mAz |"2PʣƮXB{?+Bq&SuurSpMr;X._* Q 2=Pʢ#6:X&zށ U l?t* (AFW6m4ŘP+Bb4CS3AN8QT5]QzB%aIPrmJ2W[1pQi8 ;h̦r:;Vw:A'B)N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b':7N vVv"^"1(, tN Ͳ@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; t:N;[=O{ًՔZ^v7_oVWWgeyX?V'GݏqIE#1.=zPZƥS0.nF]:+b7tN^ЪY-ЕCwDWl\膮 ]Zm LW'HW#z:2z+Q`Pꝳ6N7Ph5G ס<Ў;웝ͲU WۥCgVAnEُ_gե9\Rg8LN3in8m|i؄l5/̽KZ]`E}Zg,a`>Ζ LP\R9Bj8/&PbǏ_$jm r=qr{ӢV?|W׋7Au{=y^pX9;d5M!~98H7p9W^tuu NQ7DiUUݬZ{@ej}л NW{|fKW}EBi:.r{ЕczKk-+k}/tEh?v(N λ u("6BW6c+ 3] ]i:*wDW醮$8c鈮ߋ\{+BOW@c:A.S1Np셮V/ auut蠣㗕腮9zuE(]`:AH';+:NnnZ{tE(f:ER#`lsX͙U/t8}ڭOl/-f\˭mݰ¸6X:N7||vY 8~QG5fWI?mc01Wyib[Մƌ8ܛ=흆}l z5kk[YrF`C %lG i?f;ty0)!:4֢Y/!Fe;6C/f4`C9b:FeóOƞhFK%^tBy+9z-i{m+kم@h?#9!}^p|=5yj?vj߃~8_,{. \O`'ü jovA}LϾ#9cBN8] 5QLA a*v.#qT1z5z)Rx^Hİ|xHE)D~1Eh=v1E(c1ub*z=fhoWO(hlܜe>궗|5-2IhBpoǻw>3vK9 bgbؙrW7AEPV1emy>ɁWFGoz**gzpΨ=5y;!<&k/NgPϼGh?屽.8! ֻVr.^M`z Jvħ1/FѤTAi# 񅨴bķ%ODr$^ӧXL-p5./^Ϡ{BL Zacԓbj皯dcdntn 0SێsK4V>"tQ1%"m.RۆkS_DLIe?J7~ rZByAؼ({Y=^ ;`koR )C0P! 
/lx^bR#h5_FBc;l3jgv[j k#b&Ht.VW9Ň RET/nnz&`%4Ao{Ͷ#{rJȇǍq/͌,TڧkkBkqlKm+ke/tEh:OW@ULW_ҫWe7t!^ PGR)ЕU1 g ա" imZ`u7tN^ЪxtE( HW^zZ:/+EMk+ζ_7g݂3*%]*UN.yT )0)[N_$W( 0bOޞmLɧJPWCUP\'BLRy6Qv-ٴ3x@(tkT֘n|*Q^Vvr*;KO7QpL*6 -m?#6o q&4a[>"MTZIzBOׇh3JQ.8@ISq*]b\^ Ǖ4B8W$78.BRqE*S믓q\)Qe"A5>5N-qE*ʸ 8$#\Q]\&"RWCĕ[W(8h>ɵ HŨ#3+ʌ1Q Ԛ׮H WĕyNBh6" Pm J3+ :6K΁ɯ]Jw P+{GX|(|/nH2CJւnl4I}饆#nɕJ+r[tWƣUɸ:o.Oޣ]J{hbZ 6~*fy{tO6a\fՂԩ; ;wpʄB\\+ԆgɌoWLp=o"<`yX\E Rt;Z^=UZ{Ԃh>oRW͸z륷Eт`+k%\ZR "jR>J![t- .Bxq5@\A8EW@J6"^sUR+RBq5]f2jA>V`վpep.mqlpEr+R J+st5D\9BrP|I.."9#!ZWCU+uD#,n1_]sMΕ`%v:#Se 9F-—kYbGytk,d-dţLpB@Fɑܽfyl5#6>#gr`*\&  UpF/\c䷀Qy z5ช^'=Nm8Pt\^: ұՒ H UjWάьpE֊OtErꤠ8u̸ Y8{>" ZdT*q5@\i/{ ~W"Vp6u\J)͸$#\ Hq\pEjKWk!Lqe=+\f :`goXe!P.e&Gj!m=Rigr*#\`P|pEr!p&u\J\AOD a(lաS\&Nej\AS]^W(/N\pEjHW2~EW{c+*W$+Ri]q  lpEr+R2+#plpE5+Tk;w{\J'Cĕ%k;J|pEr+Rkl".jr '+P>"dz:P*j :)$ӯzfq*W, H ^~ ?~ $G3P0(&n ZqHu S| ZN{^(8"2M*W tO >M<|d`{U8E1jZDS(WOu=N) plpErAq5.u\Jq5D\),p GpEr+R{-q*S[ʸ @=#\`\\/ R!j[e`+$ :HWĕ1{vf2F 'WV>7\q5@\Y6;i+RR;t:jr&Q>;( ZTvo~pv6SlgSlD )<{ A -Q g96h:߿<|tw{+-`=7|8F9Ϯ5lHOHe)C">Z/NO~2*AqeP|I2\pEjլ3Ne+Y)i+'22+GsYj'R;*Tzq5D\P$dI5'UkNRl'3 Mj!vRnԱ6_UJ/=HhyIlUZ~jWѪd\^Y^|Ļ 9z|QYf:mf./sma/梞OYe9;Z՚e{5h>?`51 *9VNfmd{TiE~[E5?X=!U D|YxLo#=N:}Vo=P9 d)>?/Ż +8W.Gg7dy ͸^/#__7]?Z{?ty:} ]:RMS|B|RqvƷh9t:͐V_>H ٍE{⪼>= *ayqsU!Oz_~G}{\9.˫~~񀜧ݳֱo=xO*S^w}}zw() l-ϽҵB%Z{T'F!*WE>GNt+ʀ_ׯn_wްl.oPIg1:y?j@'O,FBt?Cns|y3|? 
9ٔDMEŲIȲB,Ϧ-RǷh%`|4u|bf/;.eeiNijHrj!bFS5z5OݐPrjuaF\,2;M FuCt`C@pU׵G'L<>KGCIqSRpr9rΈLȉdb@RiOH)[Kc TrrSRdE}r+H)J1.IJ%IF5w!'=d)~.mڤI;=S[ZI3IZl.?r)) ,YH(% ^+Y)Y גQapJ>x<j&q'UN`XPJ+RE^IaUA 14P\f ې1OWd^@$Οɾ_.o02^IM[p=44B43;.5v~@dι#b ba5Z~ sΙ I׳냑3p k'b7.s&p,!s[ۨP1e`DBfs m.]jj4T >I-b„b¹ Z:=DA2Z#-Lr&dS?Xw q)B1p1oxw 4CwB.Dl 84'6oxxj4+air&gSѦ CH8F.=(8xSE  [ |@OQXAx.trG G fAQ)v%@ LF΁VI@Ǽ}fzS7 {-ߡ';8~,uJ"}2G{^|!5w5/MS!p1BN0SBWB+싥i~qTQqhK3ݛZNf1zJ(RBᔫ_§'8dwO3 .CU4o5=!/_? }tJ}Lj a>.Ne1(xVSmjn'(JdC۶X5&.kl'X~u7wsB~Aq:k-ҽNI -83Pҩe5sg;pJz Yux=,>:MZVsArx3{uZOmstu{ar|'6ٸd40'ejt虬g lc_?yuNce u֦m(.lRO킢q{B\d׸h_hڧx:¯}PfP~w~J ߋ9IUp{4|%]Ndԅz/I+E z  by}2u4z׫~8[W7-$LgpVL|E0]Vuoc7OHsc8~{eZR 5~Uū_Em.|Mj<"MjZݬ0۷ =a8⽶(FISP~ nln]U{86w6̭]䆿*I:+56fO, 3{ydrhGp`hf ͑;;XUz2*Az^ fXkPeb` 0y6| )[:l.k{kBCjHU/{&RsuJzq`W5K%\$k/7*8Fᩂ6+8{5()!(1‰0ҟLj1Z&;X7UcDk[*M`)jHe}X3Qy)[eP}F{o,bO3epD}~c>0H(mmeÃت4zKnV-==>|LO۱'aJc2=ZUe=\[ַX quO~Y=}tr56Wtbp<>7+1[sjK1tl~yIF.d/poW6Ѿ:*3( > v1H NML `dpA WHTPxhыǤ2 F@T(FKd6|\ԊZ?d } ˯ֽ1J9OBSx|a}qqܟm)&x_wŒ,Gu\ެŽt:{pb q=lL!z: dv AܥFq"e{H!m4qݷ<rͼ?AQA7\]jh#4 XG\8zTz ͰG31hSXa 8fs`GF)h姻X8aGȫi*6%Sq9eY^傈\qځ6KF+ wmz k4  AžAc%\o%٭M/<= 2bXU,^ `%] p .MDZϤw̱LTo9jtu67 5vhm)/;j #-G<%@N!ay`i\ 3Ji{͸G^KW +Bzù DZ2z$l""lQl|YU}bV'$s~D[~ Emf ̑MiDU[qK?@+GP荣,V]J- WK[6_OQ=kX徙W(/V3Fig_p0A䑝戒,sdi}u;LRJ"=\R98Wr9M! :I}7]xpP@Z.O)죝]gJҹѳŻ; *դKH9ՕV$*wߜ0-,Y_sIj?_|:c"?x;;Ii䒻L!J(# wyaA?R[ Lb*p@~I4֚X 3EN C,3?4Jxi|k+稡p)Z0\F2ښ/K@bɏEX<=XVO+_#f{޿[>6ԥ;o12 #,3/xz?[P5ʸ;l2=Ȟ=zd a2`8e֜#f8]`dMnmk =뭡11B_6RIx1F7t@N kYa % -ѣ'Q!+8jRxqll%*eډj;Lh+ P؇YǛOW؊yP;MƝMϐ 9@IfyZG u.]5R^up# ɕ/a^}3$w~zͻ>GɞUb1Z.37ug;yfY}lLjnޡ"8nz)Ǜޜ.doy% 󛸪$(% =WzO%7ջ0:$a$Bv+b.Sy@JaX+ ,h &UBARtA<.@$s̋ |Ͼ&]]tJBxsݸTj o=fRPX\S!* eC`S $m^IE6 HÙ)c]Jݖxq09%,wJ  I;>7BJbc3Z;nFW646IH NI p*@ iar5A pT?l{A j6f#W}_Bi#<|DvXVPCڔrhP 7z>(լ5@ rk"hw%;k->^Js@I* xo##Is$5;9tn9FQiQ_>a'凝UG18vâ}M}_ծ%_5h OMsr1]qGԗ"mz̄Lng`<_S~=Qwu:O!0&h< gyp;oiq|7J=\؉0?3#hT^МMoƓOjtnό\ûZC%Xuyn~Hѷ?Zt?0A+ҭ[z-ȫSMK~Ϩj({ ]'Q* \-,xSHuepi뚰>H<ޡY=BÎfcګ J'L/F [MHn7.n爵Ȣ j u:5? 
=NPeD1-gC^삠rY}dwL MDPvݗ//y[Km:5eԵj\kciE0ʌұ+Y\.eɕPg;ޱN e i[HkmҌ4J![棩S[t%|9忱wiU~-Eg: W6,6%?Ѳe"RTYyhz4ڃpgKǫsW8\M}I_ufBU԰V4"Y`ZsTw{_>x)8BHGRf0JXҸˬVxKyVZ4ݚiI=26K$i%S']k#iكUA?yh[0ڋ=GѶl! 6魛i.!7$$%%Jt_|e?,U*Ϲh#W@y5~_Aaq@Su|6-az3EFO2lޅFqp$ D00{7`O$tz;=lJj6a4>Rއm7O(>T3B-/\3\ {oqEO=EK~y! HKS$[[y'$(ӯi+//φe[կGlk'@Բ?T5ahC̦t{X väYA[}n#EioBR/*%) R8- G-H\Ph+Yza@r ۂ*N RX(6*#5@ھPI$] !!QqHFu:ɢFfUOR$ FȜj.+.>#De!PL)sm*uAϟ_19'RKcFcvaч^Fz}eՇ.mf\\frmkU !LDD3j499Bd؛K_?b\nz*;=A,p?DL8A(߆>oUxv5_~"+lŞQ4;(?µ(z ՙ0QHT{myj g o7}Tr~HɫH\,$Ʌ cMXl)0e(/ IO 8Eqo4䋮b9K @bMC㗍B4mPJ'`syRX ;ǡ-@qEM#X@PsYYeLwE^Y\zeQreƬw@yo;[ˇՂhA{2:PlSS I8ܻj%㹍W'JG+\o›C{Š$N)U,ܰ,xUάҦ0 {h/ tmˬ ZuX2 ىPI2p+%c 5ΡӪE(HMQ{@|y|RpR7A[p66ZeIcLHB*(M9$`)rjxsQ~A Zg1Bk`{,NSH`Gyr(ˆf-zo"̵b&i0 "hq0B0Bs II^B'Jl 454U4n5ڦ|lpv$ ?lbO)[A5hF6A$ƞ@A4M(JG> Quq.E_Q {65waLMuP$&8$T\p‚4;n:;yq0'1B8qY0/q@41?{.%"!,7Hp_Pg=<H*!W-(w뜁;? 3oce\ΙVyY cVoOvξ)I: rӝs49Y=kC,hBuFV'5)b'ai'S U1T>:t0e&a̜ o 9DYQ̇ 覷zMG#‘Yٛ/$CQX>ydfΖ%ɼf jR}no_M}o4>z'&Kwlgn^||qIvzfיWEPEܩ-M+Q6(M-'_ Q7,Sg.2F3UL:bP\oïONrOi H_Mknp;{_nsT1N_,_]. wT`: -.ŧ~}IS3-&>7˝&%֙SQ)Gu,d֑ e`3ip+\p TDe-S'_ 8'[! EGJ'"zꦛrhk3Lݲ|y+ Lv.lP{~*kpjk_('. *|t}^U G<3]kH7|1\n'$ Y6|~zo#Mi5+Z]+0"RS]f7$gC0-$ǽe,A s0#mE#):w"{pV"C^y+s:m7z$0Gѳ~wz_5!{n3St[]1{-'1W5eU9 olϽ펋6n^cMw.9(`KPQON)Qfi8["p;JØ@A޽z5)ef1)CN(Unsb q&SN1j,<11b &/pX̿F }@zDN)u)_bUb +*!gT$aSq`b~٪mgpYOT|vbl&. "40M_MaZҔ#|*oleE2K3CSDq]. /0 BP]Gbui4#lF81^B2fdV\e Pj)JHCq!afFآ1Id%1/YX&ȚT{%k-Y @^QѿYtQ։X9#7Qߕ5;q ^  &uoJe%G\ 'a2>ceMքJ$ݕ5y>.kR>9 O&8he,;2s~o~ NUK~DN͢:ǵm< 6'(GFMi trCYxm8nP5r5i9w\||o101Ѩ)$M_ 7}se(mLKI:c"ϳd%@J>E)H5]ҥ}o1Sq)HM 9; {, J9}eā/+wG[I\ݹTj2!.TmR/!Ĭk1#d9Y8|%iPe7Oq:ϻ;158QXhsũxv{h`RҝZfBMeu ܡt2<TUt 7Jfy3,%er͓qWO"͕ܳnHuob(=w5*̆0;%w1wnzuor )0$=:WqM QgO[f>Yfg˼ӼRiLgjG3<7F\~kiخ/EeL%. !&,܇IH)ܞ"Hshsx"։mob[V2Zfn`].?9i1f8IR؏Cܱ,ԦyP<*[a;kl-]U5s{B>&N#pu~ F -j{/A~aN1&;O{[cB+dlkmZ y~frSE\kU=&CZLlu|zݗ& TB*Ψ{C޼^=6iP"˪{)cfzw5$DHRu7ŏ9WCUD J~'d ͯӫ`k0/_}`<5b+aa$X K>xscy 8ml׳dNpx` [ }-%o گ)$O(8JyZ5Q?X-HdGY:S%P! 
*#6Ěی*Q.d)y+U&R'#)W($oNNs 0r]զ:cL8nJ?Bp+ ӜZdʝS0!+&ꯅ|*E+< ܇K$N\pd+dQ>%.JKsɍ[,v̙#VibA(-092/ Z>NߤF}vL6$w1,(%Ky.g1aEZ+7^Otj]Pj'eXd@鈌uFVwOL|Q3ctKl8bBkZ}>}&$fZA,nTی.Ba[Oif|m 1E#fֈ{aO\6, >W}!hG kkJ$4&9iɯh0bS{W򂃾Ҧ$p_CIPż6]`Zi{lo[Dm~\tt&j1Fa|Asu2oWzs&=5ӻ{o6~2>r(yWAJxlbܛES6>Upf”E6\̚ڥ&\'HzIBԷvi2sûfMVmi^91Crb6$p{ `.mv0fNv1\ y֠wA5B =btVޓ .۬ +>}߸d}|qEU~z}9R5D"u>3v]oGW}9Q]mbgH6@M>Jr[!% )J )Q`88Uuuu=M7xW m^<[c`lS2 'b8y (ةnkk7[ymka*Iݪ_:XJ1OؐNܠ>6Ywb٩UqxVMfSL뢌+rR"ˡDj2Sia:NLAX_JJfi~bky[Gorτ:c*zBY}USݢ R]_ACWu*=dz9+;?WDQo Z+3f7ރQVЌar,k Z$] Y.8[Z 7Z/mIqa_Wb{hicϰM?wq2(GLbpՇDLG K&^^h2h?X`ʵvN#.Nd[{l=^a,7b vtƋ0q_VnӕrjXLc _dvӘYWzWV~ Tc\W'?1y4/M|pYo+;K o 󳳋+BrNE`>s֣VQR '?FIM ࿜DsVr+Xtl6'D=_hX1.Bhc5H램t T9U F6X1}jf˫ҸRӓgfo-9 S*YH=J꒰x)T^ahhdٻ^`5"|ޱClz۪_ClQ _) onNF8_"}TwEjݮGTGԶ÷Tʥj iJh98 )DpFRs1>+~bnCTmk1GT; Y\PѬ%laXӨ%6S#lpکh'[Qw?Zgn{@lj( %60d[/%$dl9ܚ Y4+aF55%[arI4Ǻybղ :ڄZF>3$wf5MjSV,1 s ng:J3e!鎀& nJM)B [Ŗ(~9Y׿.k"7+ZJ3"ۛ[PC-UI>Yh iV~̬ǟ>yJkp-2lMm`]nҬ,G e㇑,B*+Vݽ6e J;t%{%ALQhb X O: ޼*xv102Νܣfa٦tlc&ڪq 7cc׮amcJ'| W Ŏc]_hӢ v NX鞪}X# ()q4(ϰ%tE1PHI!BV!kƩwբt.SdIdn}0Shv*w],EbKQVj!ZRҰBB*%"dPf*pII;$Ư— GʡdyUE`%%LA9T$ymmxo܈ aC5u?01A^};~;`^WcXfoq,KaDL&ʅDF-r2&1Κ,ȵ "Op|aI^aaQT]J+4%xAEL̉qbR"#8 Sϖ$G4. 
bfL[Em@"ɥPw)_A,R(VTFbvaIcVXr('!wBe+È%V3Dnf@s\ULsLyV) 6H,d2:*kVC0sf9 %r,#SSj% n^"G^U,lw[tlg9Mp`\5ʬe,Zn#/͐w(cXM/%hpkt|plpFj&j4 ]z0ق5vi"> 7v-y8 sBKGydZekx7J+ :2h[w7(ba2}!k[2|!g`seV2[]+ٮ;EBעYzft:)a) 60M#Wȣ[{F3`9l yy͋y&k4dzWu/ Qbt;S&EX [m[ |OW%+̞,~9<[ϔmUjv>4gj'f->~KGtAirkkC).AyZś>J<8-jͩ` ~9|bWPcܤ/Ogk OFhz+F,XoVzV+Q8X>, )iZE;Yt{R]EiYTlN%j#(;rٱ\)Ԥ!rjJW-B}r\4\)2Nmn#g++nR'O% 5`@F)!NdkygSVk2*9Rק|K7q<{6wl7H⓿g?u< C(tgZbc)kוYAjYHͅܪR ,cW{LؠLxi{b UtCoJqQJ|T;:;_]}8oBb `( ka9"eg $ h4A|JBVХ^ x~l߷wk b]Ng9J`fK0ン,{5`--aNNd;)tGpkMwۈ$//Z}G NS >bO *(ID1LB|Ad}⋱4c脁 \2Fށ>"({ŗ iyYT)31jOYhX=V:u I!/;,= &X`^!i+vXRlwߗ-"H%i4:w>VXtv!1oQW{:e^vQA!\kʳOj$f{EbK#D{>;{>]=&]H.8-׶>nݶScԶ w&vI$p|mV-/pf3_q*E͢n]Uh~9 n;M[֎Hs@P?o:z {V|kx5 u j!_5ALRD-Ejٞc 1}:K|B6oxW^G6uVX{iGFcQh|[WvݯUs:@Ce2{X v{{}.*94 2ۍYih11$hԌh)Ȏ5 :~6}Wc6; C L\2F<)Y9d`0|'h::a@04m5,H4kQ vGv}#ENl#<Cn&+֬b*e]hĀrfJ-UA֒WT^!fkBˆSGtrRMneC:ݷLӞ6rL[6zvkىPJ+&l [o|O񭩖oW՛~NVzԦUHG-*zJbUU]hJx$nңvDITo`iMA2)i{LĉRP?OY/0fN xwg/?N$3F)efz_Tk%5gX4MDK` 3Ro)lx濊#i4!HCR=7M)tF4"Ym = 0QTGv.\jL`M+C6]T9P 3W\5˼ӂGr0J+]tj.}"(t 6('QzCxm(C4a(K/6w嗳ZzʂC';1u"8ͷ쇀ASkAHQgwhiI3ߏ( `i ]V _^A ܬZ c //1kg̲eOA$㞠c *+sXc{oc4.<6 pDo߾NC2sN$0KyM4PY1jٱƖΧOsHp[3`:g+Q'7xńP`,IM3 ֕ J ;ؒk0!0<ӉfGyuv uԁp W,pYHvPYip p`cz @4LN G UXj}mHŢ +<=[aYZ2y,2gS®$g;h%yE"D:cdt5ٱƃO#ApG8GMk}<ݢvVb)J\É$W6 Ɏ5$C6OpHK&E#GA3ެŦ|%q6f;~Cⓧu.ِ~ے 7JoDݢ\ϓN(Ou'痳"pb'-7~)J?|r7at2':1v+l\&|NS]Duwn(O&)yكtq dH).%}]QZr[쿪%V*H²!ǞѠ˥pڷFRbX^GA/g`E ȪrЇh{_ ؞-iG=FtV{S?w_Ak;(\m!(IS(= /)_Ae4}RX8.sӡ?:'Vj!$FHk)ٱÕNTfG)W=GCkCPsR +>:"ҫu";x--H#|񭬩Ltϩ䲐4ݩgSOlj^5{CcHLS"fwP{J)]kW@VͭU['54S*F 0]Á;RKb?ڔ72M;x[nnO,ab`!&x~x~SdӯS>>(xIo2jv^"RDkV5(+Km{Ɩ~|6q҉hfJASS)a~eZ`յciK4#_%Jg+j}<_WE2U(R'R\ ?BU7-*.J6 ?*]_(b섯?<ŴVy6[][uI88 Aqy-# hfF& Qϔee|銸˯uS,lkjVJmlEw?ƳWRGYn=d 4{GhN"ȩBSuv|I4l i?l*ΔJyTtj<&mW*~'*r J+MbsfH yܻy|GRORI+ъS(vy1*0iԵ;Ӊ@&>E289%DJK&KZ4! ^hyb:aNԭQXme`*)08"SB;$2ڑ :ˎU⎪/msi=R}A@l][<iu@d Ժ6,jgްñs m老D;F̤f<,"| <IקHFTFYyP Y ;@ ^5gT"{F0B'u\褎 qSPdtd8? 
4g"z(XLԾKMWpQchiF:P%F:.ilBL4/[,v9Fo|c-zsƂI|Eqf{2 %[ޜXr򩅓-O-8p+ZMRf6R϶.?$Gφ'?NQ"HߙO̓3v2l_󏮶]3}IEbk߫Qw.ZJ Ea  +d)NaX*u_g`?Okgox2j{-ϻN^N\N΋,mZZ*s0 ܠ:~v_0 |nsڇ{7{G{;yh<|h=0v.? h?ǞqC˽D#;0gonFØSX{ڜ.] '&0]w< `yfE w9liy{)G8{|r 7{O_;hrO4Q}aώ:ڇ߿a;̃W]׿.yk򒀴׭gWLם`xI֨ݭݾ^ӫ4}c4>3H+43Ƴ~Hf:F~24>"J/߯}Elxvs} h㛺j<*ݱstn&NI;<$!80_>d}'6g%S1 'mGz ]'cIyp z]wz U($[u>TYэ̩^ ֣I=[zr0~[$1گm57  L7`p)OHdj鍱xOe88?/,@ݾ|  NOs_w΀.|=>|8}Hԯ6&JpCk_& x2u9 JDŭc_e*ZW3)|nV<* c>6Bda/@XnPG2G`hH&\5z==mbos\vnݞNYPnjKAmTud3^Ϻ❸WJ`OmTA`%$h QVBx960e-b߶QB[u[Y.ٻ4̂̌*FOZ ) WL 3?rvt*c~RmLL<~6PiLiy}׺óo Cϔs(5.0i-!/ɗ=F1_պD[;#8˟" G9<alrxl/`3GiďZX]9OaT8~ݠᚴz> קheNcƯp ^ކԋ;EWMrR&ܡm6K@N3"mBe:: $sDY}x Һ/a8 W>BJ'+#n/ar6u9,Kò9,iVVCT_@-FRr`v>p&sB,a%9mW2`pR)S *(u IBRa!\%c(0Lu`x7 (G7APcAd)E`[e!B. vQa3&F+SW\ǜ k) >j xZWB.#ԁev΢L2uR q f?O,Քk.5y#D"!:l–UB,fˮwn9,YҞ,Y̊{Kͨ˯ eYIsE4w.KsYʝR\V̝+ުfKH=]*gYV)l:=0lJ)7옍  6\yO*1&giNs*Mm}v޷PXE*Rm -U dvMd ,%fR*bJiE1ƵZ" rbd,f2Y 0CkQK#fR JK8;cI؜$hO#B),}i"\qTڛ* =DVY_߈Z"/tc(eޙ|*03 @6iJA^F>$BZb*"_#2Ȓ`B&b#Tsm$V*KsQPm)1FQf@g%r̫R1CIģ`#hK ԉtE-hRr7i)V%ύ0Y_^|&hKq^RwdB=7۸:Fo^pf3$^x} X6F0֔7~)o"&MA5+Z~TWTc cɈdH޵ A XI\df0" źdpؾE@^XJ&›[D)“H"Zk*,àdЛk6I>"ϟZh$VSdf0֬2*.)Gr2Bc5_eX.YX=ِU&s9@H KE,2oC gSY@E!NMSM#U( 0cܵ8ey,m\*/l\ 9#rc\ /b)5UV$/02r5(Ea y1ºiqʱlb6_62AJ66q)/7q;S.B&DA]Ѣs@SeJ+* y!v40e:rԸgˆ+Ǖ (O[~XM5-B@1GnI1#3rop*mܳ5s#nbTq6bo]o.752@ZHQq32 ŮJ5ez~d&4pspq)B8V9TP=xU'Jr øWHHyV>L ''؟$8Oݸnkl=zf.$0rp{gί?3yݓ9 R QE`=0HH`[`d d-z?ԇCa?xX}|WGM{pD[+p<o8Fi3F#aDRknXZ_B$DAnh_RSA Lxe|s~X>\r+ӻ1Lqvb=*c)Df a]DIN3,SR-u%&a<ݚpӸInҸ۸۸sso'Gi&Vgg#B褏X74c NLQca&j]v2ԾAʑ]Jf5?nטZk,:nw8w-ʐJ%1y$ y/HQz !Fw=m%I*/UHffE$웝UW C d}qp 'R]:_U@N&t~=f@mڦq~Wv@m=]'Kmp!{އӥhtPmH$6"SS#}hkUG都^O`v /K`s{-oD;VwVǃ{'HB4%YJa)BFAK!ml>d@۞b3퀶O msd@mv@C|Kćӽm(h(]&e1D){DU#ixhsOfdǏ=yK Oe|h{t64hJ:gwds ޚY'Y;QQpД>3@O j7{P;@Λe7ꢷ"K6k3lU) Hu&#+R#AQhW0JIk~]Q bcad}K07jI֎Pz2P0apux*$ Lkl5yPKYd!R:q.'_|&$7 *}l_hv'69<hvفfhhvѠ,߅$o~)jW-u-)y~݉t :#Ѯ`]C nddq=$Yk 
fAI*e(IK:rYBTǯhJEØ+8fm1?;v,}%U?x?lټEbʹVzrIl]CIr/j:jFstz|=wsW~Mu.hDM_Fy^[Im)N[xb'qd>K'f}ȁc,~o?Lߵ5fh8=̔?(l{iny-6u'l297'oaOɨnMwL{1uR6bMGbN9yn"JV2664}ߏy뺐]'EB&D$e`1LM"JF[eKdWaü?rVf.YG#e\?wWG%Qc"K&U*wOڪSe!(XS K T&tE>]OvC)!vCn( Ehw{Et/T2 4.عZ{I4JAN CQTH_mn /ڋt+:=y9w.(1jV)84#3K/?@S%?T%*2RL.'Q2KAvڀ:ZmΖ,IWChlF@Τ>[[OY[ovё n>TcR/CQ)5+{;5PCQҚa**2D]L }4\7;g,E) h쪥;Yw_^A˶D>h܍>[K#X[aq4.W$L.dZ[@U)>hm.䛦tȤ"K(sj܄}'zPJOD7Cb(Q %D1(nDqš,3Ñ-`B/ތ*-ΰ %Rֵ9QT)x EE lG Hs'!ƃw"ꆪVpFIe)& J22EœPA1@ѴT!(FD4=y3;Pa2u_poFS/1޵Cuk&Gp=_L]3ʳ_O~gߞ?aƑRw"a~j yG'y?jی9LAs@mxvz~WS;=.`ukq^:e00ǃpg/3~J":k;|lƹ)=ȟê~ϊ9G^j28;v_,/3t%_Ć%֚DNdÐ B- ,lW12(ݓ1_]s,;/KaLσ5WӪp_:9_'z|Lyq+(wmn0?"co ڈMՔRݎwlQ?$~EzJ:ju9W|d¬$C9hQҗXm@&M , })j>Ȩs>Bm}^Gܮ?OT΍s R*Os?4uOgke~=9 EllܧwoGզbF0 |~{woh C􌖳cC;֪nkZ| 1j/=m󷜓'OY-3RN'-OoRFʠw$4z]^yǭrBO*9ZÖ]%}ywz8~s/ze%d0{ѓg?n3L? [bh7 oZH;ӽw~H90}`\O_ )у?AQ3QJ.ŦMq#Ne734<$ " "(Ԕ'|1Nuzlkʤl*$%ĔQ&Y2R&| ]o%<InȞtt!An?p vIEv||t;Mh\VdC҇{LAB6՘Z%U6 5֍‹N7h%gli81'o}^HZ2*a˂B  f *p|#Zs YbjkNK|ұ:>IU5m6(͚ޠ]HH!~ e6YYLN, 2&``[I 5J<(Ge%Y(PDx)c)Tvtr`FR'[I~ m 2M+s9m;6('?`&%}JܣhkЮr)1$9edD &@?aw]bJյ>V+9=%`C->sr6WmeNkaƼ-ޫ9oA qG\'ءk5*T$܉3!b 2BQd}32Z!g}թr)SXeMݤg]vhuyV7hk!kKl*d gzn!eLEkn% ">6hLuPEAeShS2q"|=pe }R"n2^;jVMBm{ E׶ݛxk|ɥW(&+4F*݆&gN&iɢ/.6̮.itdCDJj3(jQU ѡ~zuɅF& /ixȜZ\D< GYSLGYTiCgc9gB mNo=$%x

&;v5>wm[4HуN'<P ju7%n7(,덟ގ~Çtjc<=5&T]crl`;& qhjm8 m][:bNyRt%?Hvyk[Ya "f 7:3QKuIa1/Fq)%2M+'+R?XL5A(]~"هd !Nf?c`yv$ͪ0 d#p_IFND |Y uYX2c(C2Qa.p6OOr.,k6WWlJk| % R%2.:km){l5 a}n}#&쎲 Q蠕+*ңU\t$0Vܒwcv;6 zJj#fL f-Q/FDst NJpLNX a[ Hi[K#T'D6X"ǮJ{Kt&2xboXޱeQ[d{m,/wcfJ^i2An:j"{NnC$${rUL+Di!z:lDF?ME+*kP|.T^&-W4e!fؕm]ZZU Ƅ>ԁO[XkTX: 6}7,oۺ|Fw3 ʮ77*6lh7,0]L\ߖbŶ^p#ַ37QkEFo7,o۶nv%W}uqf2^i ImZNJ0lUd Yd L)>f*XںYuJ^י( TҶR%A F FJ)±9) y|rk[?沫?}V?G9$VQX)ŧmӪ+s**#3sEc(ͮkSvj\R+DeNWxуh뒍r5 8 ̍ *viq8@ - EV2-5L } ڥf"ZȕPK:2e=fu3:6gfhiLBhvÒ)c; ,Qdc/XB0m7,`1 n.ڔ;f4G9S25o̝?GUNaSO!ns`V|pMY傔 p*k0zx!rPIcR *L̖ KWdKZnbj!=(+$usNۦPnufHJ4Alwwn8( 5u%DNk [C l`,mdp@5lZkRm I&̱zNv\2¤L*tdT yZm[q5y'Tjq)L*2MibU!1 S+iw]v5yd9a^'+7Uۤ^xuz:ébwҀLT!AW6[hlBeXy?li*~&IU/2dw˧EjqRrtVd Oz#u׼.ruvjju+ӦOx. `ՇwQyX?Zv|8["WiqZQL^Kܳg!8yI^ppˀu<IJDs_%LE ` Kԩ(cvr%s)a[7|qy.dpY܋kjX/)RvC*_w)wHYGU] >-f*;W&Y~Z[u-+< 'izp@+)M?Bc_- oxy{h#^cgeX$Z4A߿IsSEJd=k6bnpӰmQ"ǍCoC7X6Ӱ~|y❖gG/~{N.> {6ȇ3أTx:: tqQTZd!Z.~:Y{}ߺ6fmv+۞c6C7ȮOl3'gOsyO?2_,7|ꀷ:jgvL6PT.j6ת;$(}lצ/9YH~0b["8׿>a%uG"o|ﵡcsNW+AˏZ-܈?;vQ,דOYXiۓUXWNIm*2l"k*nFI;xWͧ|ڲ\ E6nurQ4V)\\1ɰ-(W$߶ ªJX]/#e?|!G܆;I{7U;4Z}^5ͮI/W5IhCP9VѫXg@kMmdk˗U$zra1|TW/v0߿$:B*ϧnlAIMݭ+& :I!])e6}>\EhP8]zdНRa {E*@Gߏ 1w)t ک"G!~mJ_/Vf9v9s('pܬΏN *Zdhl#Ogk]˄B̉yJU]S$h:UނO"(hujIDl3>~h>~|3 Dzo=*㍩g<.-c5f d/ 0oK_sKҨ ƞ-ul u ͎q~f* ي/>;^rZ43{ugck0<]q?g}`b4EXڣ*˘| NEt]>xzoxbAZI2u)Q\[$S|ߎ0ئ*K2{}a*؉Rz.m`ߑQk=OSlMHO]L=S0D~qac$G'ȏ8_y )1V?Q{{zl}y@Ga˼6.mΨS]ĔY+n*J@r6ƒWkڏj=")ݝd3m Ç~fCw?hv|G#œ gʏRAL#ߑvG/~ZhDJ>qaȋ( y* Shl:n]^"V'j0;y^RxFczɑ_6]G0,0ӻ@lJ]*eɮ_G9%KLL)э [/`D_DMSn2tv}SK8WB.! JW"hHl}|ۀB:(C7웂]tS௞X"]7hL0twWb 1Y3ݝDmW)$,βr\wA\_|r^~N*AJe>~&"[)¥g:KET}3_67q.^PuﱦQ?)#Nq:Z=̎D:(2Dw q )@4adRp^nt8RNNz;79mؕ<@2zg7LJaFjʢ^! s1sGm̩[X̭60t5 96B;T],y;byQLmSj}@IYnQkz0^!ԅ]j񟯖5nv.  
SY:Xyg A'q[wr k_}6ohAw%z V< C{u/ݣ49o૔9ZdHJQe͒TjAA8D,[ 6"Ij-2XkH=$˳qM?oCgרkwo3G/5!ގ1Ö1&$R8EZ"U@$9d&5pKW}Dό4?yH3^IDI:7uqM}y2ƐVb[c֨ TꍸbRN1ȢT1;5쬹ZT*wQqjLB rpí%$Pfޝ Nc@)j8rl@[R0it5EԺE@a:asvr².<mA\$L`^aUU{gr mk?r v&j }־F%nYiL]0V@uS 3$K#c\*k?߯-KIa:LH|)s 3(mjJ(c7wֱ-c I%A@I9P5/mĿXF q@q= Z=`/CbB# <[Ktj "sԹIs/yɡB3ŕ)q"ւʎՏQB\V_%dTh\$nsqI y5G^?r;c cAo` }V%o{F{j֓PA`5DG^DbXeJon<jUrT'Iɳss~_˟~X}l>[6\N~y>*5l}Kȝ.wY7mm44w|N`LLn̼{{[UD4Jmq9)$Ac̘P ( R` VPA04z~'] LJ u10!-#Rz1Tp0V kuP &8gE =C'Eppy*b񼾍4@8Ǎ?ZgG&jїŅqipgHxN`t2Vz{N,W`,t2 n?m*q@cT[V5"4V؄cgm2ѓ uE5]< @|Kfn~Ձi9~o (< lڝ;[#Ý%!l hߧXv%,9Wp}yrio<{woe$Z9 ((p+kE`Iȫ+ 2_trqflntV&f[ITRNn';I,`.l䕸m%z<^yN ; 0yg"xSHf2Xx콡W=*r*BĀ8律]PŀL9K704CqN^Dک CPG9-? IzB#  Yx,y z7oal>\2F3 ^ů_{ʖ1y@wUEaU{y0 !@(Z)y!xc쾏brz]'E 0$ қǼHozUUX8x;PD넼b.]F599NLHeUvNb:Xѧq *ҧbu~j+h=S8($pw:( x/_B!Y~lvl4VY'LP+K-(8 DMԞKf!H "oe0FNiH,ce$T|?$sف#Yj9Uh0~vD{+{buDgq|!ߟy)XbЂ;`4(n]+ $ )2x#(;ǫ(('6bki=1G"&4Mi^ޞ`C+Jt·%%- &RcVH'gnׁ:ܮzrPKC߿k>y;Os,K qE< j$'rPg/Y9@b"~`~A2跺37ԗGXX'gmk;.R z_5TJ$QZ0iSL6Ĩͦ? <}cVO,’OZ;ބ^q"4{ir(: A|$ks<R?GsYjX}TUpGg5<T~eu wybR ?dQVXw%qa)Qww)VIk)6+`uXU|<͜i:͜i4`2wT1$3 aG`u4qs4;!uN ɮzJL(tty^]CL-|љق(1z./i@b/ iwRјzDcsީxl8}wwF/oE^aU[z BY1z{åxj VaK4 @*dVRH@Z+)]fkˁHF.2[ Z:f.7!:#? 
d-DϺ1SA+Q\NIU# I$&Dj$ZW3ƀAF inYWF'(dh2w5 "d {(K!Ȥ@F/ 8}740ݛq{%/֖+;Adkw"VˡI(p!oX(3瘈Ӱ~!B)`rMb+Ow2Ӈ!v\ÑKdr-mW}$HI?mo7`cW7> +wq0B`C*`Ylu1FqG1#ѽ&tw?uц 򔰬#UيrFeZ$;,]2 LJpCS;g 6X)ސ뗏SY kU!9eLg)&'1> HtȍY,eqIj_ٰwcv5کM%rPĆB[=/ܹr?;tJu8\&qJD%J S<Œ1ީ \l݌o9`6FFZQ¡&LԷ܍ůI+R Q ata?}Md-cT!3^Y xiyԊUN&\},HEɣ.o9G*2M N՛ ,t wmX‹zl/!of6Fthc.kYR4Iɒ5\$9F>yxHxʢaRڻiILj< FBs)5y)X8^"Gqad,;GL?'~b':ciwfG]|.&PʘY+2W2UҚ2MS+{i~7 J+|22;{ig-e̾{АJ64%fh1pv>YʼnŠhof*׾M+]R$<_B4Vb=3ځftm&Z dDO[WcM8#FIZ7PnN8BW/bBuJm jtEDmp&8k dP]\֓(]@K߸$$sؠx3'` a\B+ok%)o Y!҂Zglhb{Vw>T$tkB‡F2YqC/Ӑd!WW ,lO ;<T~1;k8|T%xĠ<I?x$sVZ ppQc,4w2nxgUO|m1n1HJQIUЭ_qL"Y\aZC?^rE2"G&=B;RHvybk/",EHUcڌwBcjEw ;رHy  ZHHeDždgH\{ƱC]#KSM)mSVݖfĒZ3.͚elѓ〛ϬƎ64p&֍<5ÿzB/uܜJ?J6Uӌ"G'Z4C ƉͶjm94Z\aZg<6gs0gNu(-+CeAWYޞ "q&5=$`2` Ք&|5\E;ӨtMߍF~Vq:nL+,ꁷ~:UGFkJ&|\[㒶 !չӞʯ8q*0@J Ei!evmJC*;DO8,_`Yͬ,,f,?Y|e2/LdC!Y4-cYȯ>dd1R!\f94 Դx_DX6MU\Ѫ%҈Иk\#k!Wj'6J_Yo|*LT[qy&lrӗ?+MWJ۞+ &mG]1V6k:!W(峤fxD`RzisKPkJ3!8pSp >͎wZw_pzdzW//盳~TbǭwF|}||\lsyn˲P_¯3 ~5 Ҳ'y'ӣ;NeԏO/߉b b06QׄP%I"@{U煜IsO.{%}h@@j\\`梙E/"{f>i֥˯fb›<@K!J@6PD;ȾwPЭ$DŽ"E?$s KYx-P~1=6/c;iO`4g@,Jį}njC7'wE^zs@u]CV"nm2cֲ霜`"㎒Ztw?; r{?l-ѱҝqF&&hl9ijxivƀ?mWK@5I,N:T'.C u4 }BM ܸ}ǤUt)O}kI2Ӯ"\(eHȟ2-i7gcZۓ~ZR*P$BF6X֕ru$n%j %w Pn۶ƶ;m9Q q6!HwCøZE" =ח*涧2 lI$j/ŝ}WjjZ_^t [gŵ)d AV(.t##Х6FZdW-1&+GZ\\B10.l3/O:kEq<") zWbvR[B$ .qr@dd/9oqh0\Tl)PRgǁ.B'x&f! wY$i< 嗭@B6>? WUfg * ӬlEe:ק@cu#X)Ɍi^rQeg6,h :5] jQS*wU%qwȗ[Z, Im ?w^71\!qs]0j;pCϐ@7ɗᐥU. Ws`8X̉|hT̓>).icd*>Uk6?3ڬAXM^4ᬚ:L,!MN7KH \Enl{H]ҍGJ0eٕHSs+IPKx- EhAoŶ QSqxHA*Fqceݹ[L֪6?g }dU'cF7XeU3C1f&mh<rM!ueAf|_<ŽτpSXX @4e5SJXvyDҼm!Xꏎmfj*\C]L"}k4 XH8D@XԨ Km T}Y{-kv$o{8e ¢fu.Uff-:B *_:taԏI="xP85?@S:x !" 
k]/I6 ɣY yq~R1BTٔI$&%-̥^!k u rp1GWB:` 5v=&5|,d .?/("<3뼛K"W*^fFb0 h۪P@q6P؍UXHzIuVaZkbm#-v+^vkqLiFH)v9<~"Hi{0"ILjro,諍|fW?yؾ8j>E`*X3X\ӥ32v4S l/7h]?aOP͍aRm({U!$e9?mjpJ})CW+0pbbA`BČ LQ bE՞.!-jl6 sd~̈́2K*pmfa 1[/Uoˌ3AXٷ7L% {}^0J~L5Qn0i{4nL`W`|>\፵ӱF|'O۟'M҈AFbJڰ?d7hwR :d4w٧E@$ijh;ѕr gҪ0vk~׵=dzi!BqfO}I[p;%  :@}qѳe[A?r-lj QWY3Oo6 2f9PVai7 Epcw =߼x} <s4$V:p?pf+@H4WmI %&i%9R<` *h9: snG/pNIX{^w>|'DWBA Tr)J}Ns dqhN˼Sһ3&&"Nߙ ?Ȫ5>J@OcM}kR?ybV?^굜=o}ܙzk>O*ߞEWԛ[߿FWƭ'ϾNKox41Gf}쇗G&^Ż@g'-zw KgiM5Ut\Z\5>k0[,ODbAMK 2Rkږw*"y a!U(p4fb^~^a;2"h`lQD6A' Ev{WG+V8xO3D0T?<-n-_qw %+2yݖexENx1׌fvu[KjHZ]6dQ! 4G -˞KK ɷQcA xy %t]\eBL>UC;;]ko7+ }l1"܍A\X'76GzY o硙Q==#K%}XuYbE篂͞yGu/|C׵eԛxQ{lFQ|e毚wxB_8Nm e7v/Ezi"bΫ'W'Oű;}C<{{ihu4ZͫgW_=/2v_o>;S Փ{RU_;N_TߴzGJk/Hz&'}R;%CO笪ŧSSI<ǯITYNV3/>בmeK~of@U[yFwY깷4wYmqZCn1Z斆Nq^q vB݈xbƞ ^6eZϧyZDSVVc-B" -7' >'4>D WwÔע%A,,Q}—5o/o/o/>E߇O>E^ED_0Kc|mG-crqB1%!$}>}], I@z?9˯Y9m/_~e;\썰~FK vk]h 3BQ9 iˌekˌA߬Q^D6amKqyPzjL.ּABli cK[؛q|G,QR,5Wc\x?n7oXц@0dJC}HD ֎#I$: 3WE.g-DnVZIqP]MqTQf Tkp!W ]4R괨*E51j_=1?%8ְc}RZ2?%,ڙN#,? 
NY)km٪2}&loƶhQ;b7;g=C;wuw )1k1 &?Ϋw_>ojל<[:(Ht|y๻r9f.z2>:$=%1 h) &1G  @ڇ6Чݞ+X8\-/":k)>ASbxmt M)}PrtqƲDFaWF%UȐчd+R醴\l~ Ntwu gNr9$9O,%S G"Ñ[0$xS}@`/!ú^ct2QYXmUuI&]zM*k$A 3ho)(Y2E"9 lQO(;3NC=qe{t{V߻PQEjo@5IV8 Tr \Q U.%Lj#]L6diƠsHI Qhܞ;а(e)ࠖ rgH1GA,Ql0 Y>Yr9aB$IKRa(XrMU/u^=$xgjRV7J wVJ[Jؕ?^]Pc%¬T|F16ikztOnl^Nx#FkR.z|"OLEGLK-jI@:9ǚzpono..mQ\\[<pFLqP˫o]_aт pS0 SAu6E_OOZڮGBNͼz=>Rh*_v ܶJ2R5dŀӫ >MQQcg^p(V'j_Nr]_RFnK^+y[h{Ph3[ӼK=ËwJbkz,_/|VT=SvڼuqvLwo4O3+l7$€f@4i@[mdk7Po;F|^*]$ZU`օP\v^{252ọ]0ɷ)6^O&zR~DrE,/ے\ p]Kbqz2Y묳Lj2㷗osTcIlϛV眪S.ý2i(?zV5ˍ]4R&r~ODU轱~sfZcyzpqtVgzĵ)CZX %&)FF)`;Vii)INVzgiVS2bT*Zikɓ=OV񜢛n4:&©%P:xKmdptuIY2 wI3>1X ;)2Iky52x.jFnRꂠ&̳@F 4!LJ91h$K8g !kT ) GzzxZ]xZ]xZ]xZӺ 6 A2[m-t˨tC Bi[[URF݀ ]P+fїO^(%U{)aR>P: 2E)EHsI Ԕv{m1rx\AjfgxʹպպպzvBn tDFY{ȭ.WER9Q}-3Rv (kHuXȼ+rb4QK f*WV':Ataad&`@a41T-% == DR8Q?%.aP~ArJZ;pIrqN2'j\/ O<%@}񒔼\b#ȕp`IC㜊1hLjnMtΠފ,Q9@GglirM=E_9wmLaXoˑQ!Zgj -m)B!ć_k\-B~EwoBl(Ɇ9\ CFSk0k@q}r҇obz66 :Ր.C0\t[X|,%˝g;†#dK)y{^@g]RzIw(׫l9ruH%!G!m2E9zb%!d@PA)*E.ꠄ9ĸ%%(Grb]fK~Gq`$e`,<@DLZ(DCߐ"!d]A&JnJJ]-h\t6()Ƒ$ [<{BX@Sb t͢bc4ycYgA֓*piQ*mVtc[hn5(fҢ=Mlljdz#175:;1 e&m[NK l.E|cX!moE-he@UgM >pĂXO<;P0aI͑̐"%,tdHPޔM<I 7BJ5=7>r%%R#ꬄBh/N\7yf. Cî^jp@[Qhic:SOEeA$b@,+c"X.H ̰cIle(H] G􂀈c+=`+uUTX0\UX)/Yv6&L}01 wo[mEz~)mͱ1h&L!El,,ήWuBN`&,U#B"SIٻ޶eWh SC8o7mAKKHK=h=lR"e+V$"EٝݝM@ZZơCdTBN* 53 Dy";_16Vb$c1+%SȺ1~ؘL|hk|CHeHcƊc\5"\IkgZ#,XQN!؇:V'=`?=(#UR60}X }#gJ䝶`^v> L s5I(ʒՐqIs$& h'7e|e*ݰuP9gyO,x@5SJY"=k$UD'LrT6rYf+AcbؔxBBY3J1tBYDCw0l%)5q0nG+Y} E|k>oQx$m + {DJoi\t-w_jqin޹~]z~bRcH,Hh`K#T ljњ>sHQu WH=@^V*LyB./T+8@`"-Ί|zx7uJ1IRJ=*.zXO MwWOosY^y2NrJ gj@,iyBp26*8 1"M$bKtSǸElER2i^vD$$CƔHČjH9BmlӘ0@b",e$61^mX RSEM );J t)H B 6\a0iN=޺]snXNS $񹴐3ȹDsbIr>ND5f0OX[n}OŒ=WQ=!Ts:\~̛B GXڤVo4.V0 ;&_+#M5|f|00%i9a|k6 (UH ' ?@>33|wJRd&& DFĔ`h,#TD{W޷hDv0LZy B pa&җÌh3)D$ .挕0FB)_KRR ]%ܘf,*-1j&%(ް[?2d*EMԪ#S#6Eig^c0CR|H Niי~0ş}?0!7}_A# L\'i<:9I"4@l otȭ#X׊X2ڊLFj14Y%|rU=+by[55={bR̩]vI>%O+!rK@ª@0BM6!n$&JcFr,aQR1rN@ iqWu`rw.R_[DL(FDF.2ީ;OWFo/a.;# xg{*h5-a#G a]wSFA 'Z. ;">ks_zwpw:={lyX>^ǽ,bDQzҴpooCsè^^6e$wjM1>ER^[壱KB. 
ߢq^ [8jq&ϗ%xgZ*R巠]όܸaf㋤̷[#jcy돳Cm!G6V |ʮ0}* n{n6Qvv6}trZEX;\(xW[XƘp,8e5lO3tK˧󡬜В8xq:jpG?43η3)#FE&RE4dqO#Jm@B"O8X[΅'16ST>4Vx?5W/a$p#7n6 lnΰ0xf|k'~k xx$'ƓA  '+I`AH2X8r!eD ThY@'T<S^OB ̽㪮:acc~PxJ=V{oL. r쨚X!xeVsFЁt{Eslѡ p[I󛛧5y̰3"TIT?=K+a`\SUE*sR+rxz7VDib)#fSJ ',2I*rڮH/hj%S$m?b{!w3ųǬսftw{ekӪY}f\<@/^!=3fC5ޡj kt99%'g=fXT$`,?^Ar|[9K=KCRYs` }luw\+M| ݾq:5橙E sr7޸O iaH_v;w/a\CGvι~q `rlz^5^}@`adO7Mzv" OA|L _08ΥuDD$%+kXkD{=^ȝ,tOT.41#{ zxmpzI27L =hΈd /si"bwKdgIR#Hu(A|5"UěRpiư4wϾ3s"Ƃ|j~(LU̦̍’ԘŎ>=2XOx"% 7F#5yjf? #iƚGaI#j{{+j6G/uamzt`@$APdI$0kIc ~shhrGVC`FVb&(j'Ig|iT+\ӈF id&*MF9wBk#uJS1X1KBƗJ]Q&z@K՞@Rz-)ͷ%ł>\i&$%S0+0J SHQ IN'Dj'mL .ފ43+˷^wC"N2 pl㈊TYf9HWQbVm⊲2ݨiJP͌|x+훓 jJxtjs"K+ uϽdF%r15[iߜt㵍 3i 5|)̆vO./`Y4gAԞ/ t"y&%H:4(d|rzàH V/%zRTGfMW#KUo7VUMԓx}n6ϭsXsR5Esrn[*JH^I"h]r,BD*ϝ ªF |ka 6?ui:w{q': 3eQ,,3"1%,72%^ w\eǫa G=_wd1=R\H=հpg>qﲗヨ4z[eגTw;,-~;qW8W2rЬg/gE)BĬ 6h%/[)i>'ZN$6f"D˚GrCs.P8l RZ\w~Ba[} y>xx^\hj Ppp-~@a%H mXt|wfSlU0#85 .%qxRlѽ0bsHwʄg׃Kn]+?qkHq{gT<@B}Oe!^")$$o$zvqwܾ==zoEzRj[ɤ t}juR=Pc=wU:[pEЇ,Q hRAߎ?G\0'B2/'|$WǎK 7k($U ':Ki7rFÍG&Zgz8_!ro0R-6~w,/Jt(J&)I>UEM,)Ja@ []Vթs$lL+\7Y4;X;9Uǣ׀Ua,ZuWHlS#(Dx'98\4IwWWӮdW64{;Ы·rI0({'q9fg|+} {KAXH`Oگp_MC_WWmVS{ߵ[ԻP;L@Qz6B&T#+vQDs[T54:@&'h# 7I~|< qAev+-s9`'AW.[ $wOBba߻NIS wGډL)Idb4v;բS-$etʹx4~` 4Ns7ې Qv1fy Wm#]tvʣ` E1x􅑎 /ϻ<Wi۳ E|0MgCޱZR_3, c=#!/Tn@4bB~֣#y;^W8 /dbx=ß ttm^r`+@=+Ձn4Թ-J)_7_uUH=YI@nOlKSI8cR"2^t5 %(4N2МB5O)),:$Ֆ&~l^ `5:@x !0^k{gPX"Y/B+w ƌxsc"%zg:zy`cm<"(bZ08qLėQd "t[F1uYj[:IFTۡ67TLS^VzRB/8 )'Ib>HVe#;)mr̔QE T\# وR~rbQOwfގv2>\ އ5)w٧gG埿򃟢to+*~RcSqᮔgS\;*WGoQTLqn0'і ?ci%Fe?> ׼`.%]E Sld25 dхJۧ;+M徺W7V}Up2D5e@#h3"ķ aqO!Bݺպ[wXB,)bwA/R(#a6.. 
yؙ%WhB +8c{X#<Y)Cb`13jlxd=4G3䕿Meu;bQ~rzǻ.uGJs:³c~ՄiWg5],ȶIOmeB)hm0f:zu~')A5hR@QnU Lr94G\} يŷ)zD qSD+,94֧hN1KaRw.}2WW;VmȝFYPP;Sҝ)&9!5hdHmVʆȤX# xrJ$ϵ㨒Rܪ.S?є" AOymD3!6$-Ojaʚ )9FP,J0,%S+/9M|z>٩vVlŹ׿DE[chJI'} ʼFr 2pR혣V VNXbߒcf  sZw2uͨ<9[,WW 8)K"dx' s9_" OcHmƏ|= TjyYphםw3( m3Ƒ y_{rxˠn{!5 otbTӜM}"#Pu"5A9ivUJ@%i ܦ9b :_0JX@BREBYGl0+~2gȵ~*!(0xM/]~̏kaB<ΰ02F (WH}a[r>8 DR?N.;͆1'` Učn)\X撀Se=*1KPpte[Vx 9:jJ,3Ӵ|Ք TSJO$Geh}Rd3Tۉ'\_˙X R(nR#~폵"Hx?
0J&9u0#,arwpۄ+9#gJ{YH }1rjysD(W#+4&;LC,GѬ`Oᓆ551J?<J@N'?+UHCzEr|d( TjG5+o\SS9#<2.F+\wȃو¤?{29?{?&f$gN#īewx\xT?F N0HI-b:όӠkPo)Z<52 dw~+G cV;**jA:M ȓTm~4 "Fu#ۂYG^'oo>-(|D] &RgiE>2;Ys~0Xcw3kP9ggzݿOR~e#gk|pd|Tp| | Y #1I)7'N֣:d*]HݿTdB1]$P(K?,y\ |;8I2v>뱽n'.}I[=PT>}>G=XTa3Do?6Ci36Ci3[ {7w _I@`4 WK hbH Lhcu4%߯LC,ѩ;Ы·2ɡRλh\ǗYj1;+_s/ڙ=,PhEϺ[Gs7}|n;7۫Lް~N^sm볋{=t$3YFb:Y Hr̺tvSnM]:.UրJf+В[ 8)ba >NTa<>o( ND*W9TK9^C9@xv$R)Ǵ}-<#<"xO qi`6<#Kq|l=rX7h03J!5\sb).h)\^{<n9VZ7Qv Xu(DSxLuDB:+ #0=ô 4 &OJ=F98MLǗM_:9#JY!Fݧ%ʂlYmkH!s/*մ艭\OKaIR ۠ )4Xq&GJIRX*Ja,S@ Axb3rΫ] Pl !q[ 骺mx`+@#R x$Z:QtќDFulڸ@fN*\~VrXAziDIUWKg J֖8b s e \ T+mH,!fbgayyn{J_~# >rMLE5=0[ǧ<>RuZ+r]ŧr;zktvS?r Fw/zJ)rooa}ۇ{-(zntX770C4z'.)5KY5?K4 ᘩ0/&[|g%=/i,Tͦ<Tc$:ɚ"g*HUN:G^1B0z:5Hi`FhRWYы %E89#])1MiaȨi14R ](&4S),ԅ>Xk/gY <K()krLZj=g`{kMHw.# S.bD/cEo=gg\zXhhaf6fT);F[X$,D|([ɪ)DZ[Y RbvJl3 2XN(^G@ZaFYHw^%GѪtx!R<,!-%,ro]zԂNeNں4Q^ d%*PWygXtyP1dvK-CK2,FHt$oӰGXƜtYS@MVc!%p$t 4 G]$³E EDYuC[N-b$G V`ۿ, swj\IXRqР()*R)!4Bj+ٚBR*D8GHTuM NaAP Ggm$[8rASri&Ȧ~5A*ųDd(}eE)pFH#ZR89X*K-WpxҤL0 {TSAX3#+l%1Pwy8#ho Z°[RieE <`2::ϚW _MHzF)ߚB `bEOcR!}0HbG0$\z׮#(,eJΙNwik ,L2R2x- W˜ ; 5Y-/2-8!T (w iaA8( P8PXpo{:q2h&@+0zI۫kyLnܜ`q:8ro{cmˊ=<*w d 7U,o^\K7Ma}bp*G^C݆ͺL%aY~_~ Y.w+9_)r.+P*y X'*7(ǧ5#Dܑ ϫ:l'ܚ܀X/&ss"kS* ,ɯPGΙ)u"N1'h0q}Q;f.^o=4}=9xufS x29w@=]Osm^HN,l=k5劦7MeN]"XW1-8yQ9 Dϭ"J5D[-/K-B+'L eJC,S $‚0d᱉PFGFK^VJ%vv(tLJS"PrEO+Z&IeyeK(ɌJ"\k4& L WcN") ;%jvz$ThUݮ|o!<ĩ%%ШIgd~ ǛՇ4LG=L0$Č]qddp)ʰƒ }iOTAhi!5Zj* Fa 4v( 6,=X˗( 템pw셜gTlm4ANr-qsy@ ^W#&Y͏I^c}Kt- G 3il7p$/4|$k'{3Ijhq?3V {\4)ڈKu^VY5`E֣^IM;[);frxx]ug 8(qqFROm?yA$_KU+94.3@ݭj)/~.}g^ܨ 9 U!]M7!*ʿ:hC+}l;żiO$㷘yf)֫DSWBzD\+۬yݐfU!ԕW5ߘ頱񖱽1g.#z}z >Ճû/_Ssz;sQ^^Z{DVQS3_Qt-uF=}xe+yf8ʞccK+!!5 P:B/MhB⯢)rMc攕F 'e k+a o9%},=qDXTl@}~zn8z9c U%,)땀%8e=bd%We}\1./l|tB .&}~n7 #3J͇ wΠQ1KCf- >cE^!!焤uzc %jp Y;0k;EX>ܘYq;f 5 $io@f{j6_qO҂ fS6G7_1Y[-RX_;/iB' _B'/SJJqZm rvҶD uq&kI/M78~@#p`u)k8K g!,5DU"n4DH2T::XTd)fN3d? 2;o&MHB_m78_I鋮&?MbiYXasL.3s'#n9lr5Yr1Bk"0X?$ba;52.UPE&RB_o}J ]vb&Beeb $Hȗt+tnғ󛛗i! 
3^ROi;%Sj2L(EC}Z5to,Äahb*%N+a{5RΤOAQ"GFQ0k5m?I *I;f&zRtisszZNrU_ @1T"`>Ѡ[G1f)Av'ױ ?=uR.Ȁ5nfvm:sq[f3jsT#)Z 68k愈Xa+GL~<8=&eϪS"7W @*9}CQ-]"VvtC}>{4)N8ތՅ\i&.iKTwU*`D€g&쬘WEn܀ʫ6Jv^N~;  )a0F (:Q)^b$᷎{"x5A*׭xT#-v![ar)#A163# Æל:ff IVŒ8*x , cqNUwrXÀDA6,Xwcu7$vL\q{I.aXh?v ~ing}?cw;bL Noz&E,Th Gf8vªDE,tu1{p@Qat5V#w*۲SѐmM_/T߷׷K. ڎ\&vf=_I0J7>R,e(+%ϵ~]<Pܓ?Z[hn#gVjЛUB3~0 Ġ m.ϩf\k)X^\-jWYc JEB,lJ8Q[IQ(!20$Nr*)eL`4f5l7 i.ɫ3эet{?|ɪZb 6s`춶«p l=L\э|b})r%FlD6bS(iwrՃ\S_>>$xh .J#'6\YIgʳ` :JRKZw=^swozpbFGyPTH8;O5Rv=p$ -G@oc7σ )*?+MjhXփƣqΐ'4O#o:=k\{<ްnRM:?%B=ںvçfj0f!\ͱ"Z soO 5A7-1 p, Rg m,gԷʸL{C2.2.7C?gP>S,&!LLn;=2޵=UdGϲ?=P-TmFklpVrw띒F{ȶkc)*(ZcK#D$TKff =batUEhdUnT T(W1a3+sK#K^ΜcD!#4GFɴg^g\b/TdJ[n ' V.^5E>59Ҫ4/h`w'(qcȀ&,[-C~4-C- OX մk@9C7 ɷփituNQEߕ zdw3(BHa2V2e!XB8ʽ$'2fbM4Xਂ2j >0ZtglZpF:|n]}\钫rU;6 z[lEطơS),>c` +#1΢j^ˁ6$1('cs cp-P:}Pr,pQu8gY LNG+GE 0WeBHIƢ$R3ĴLcհ~@3R*ػ[~iٔxp@-2s 9 =1Pop|Y\FWt*.4{b5ٝ% ߚܹHG D#3v#J2I}Hn59 FSwL:B;Ҩl>Q^`2tXc*J$$lsX6T&L1Bs;j !b)L#fSdt{iɹp^(ќ[LW*.psAփTԡmWfqᢨ(FYX1LJX^}^`J([_ǥ@Bf???vEXŤh`y#qH0/:y.`wW!H_v6_叏}ac!!/9fJR/[-ꏾj ¸cMR|1}x}Ņ/ҍJ #ڟ(2ɝۧC-XMIJ}ڳ*>#d m t\Z< 8FQGȘXيcs%Nwb)ZP ,'HGyn>aY pA[U+Ԙ*&ẢL w)W¶V{DDBǴTrd:QFtwkQT2̛ iw|=\{u k\1 qfN~jAH{:?E6H ĝH[56O#JOp] n* 2X7݃?^@oFק8ZA,bb0ق@c7L7p5=m nPG?(lgv\@RRRz!FQXvW7x1a8Y,,_\&.DŽ&:)^'4N&`s|W fcY)Ϙ:M3Pc*M,uCldK۷֭[vtĪL=<^1S[rYDşl)k:.n k8\]U~$x wͳ,Go'f>pޙ,h\.U]bkav*f:RH.If Jf:vP0-\eu򝴢k-cưK>=ِX-It.[ݪLk;/qE)0QR"Hp>e~P@ocӇxOxSoϦw!yfI.#TbLqX]K̍IYNw&Xӹ>>|]>Y:R{5a vI)hu8&SLpO+MҤ֯*jS:w^wJ#Z |!5B%X;!E5a] ;'TOO.B2KuYnbgpånF8R8keR9klUh_iJ'5>2~'u%jEia6ֻs-7;Fݪ 6$L'KyrG_;Hݖqw Ɗ7A N'!%I0b>ߛgA?ݗ~rIg r.z,j{gBw;e=̈́HȲ3(![1pփSs4:Ui"K\o-fIN2anR~2(zϞˠhdj|ۛ3{E PFc:$TKb84 ̘` BQ:`~EΙ+j6!/ōγu../ M6:zuy`a- 1IwX#D-ʁ=H4h0 㓙e@ Tc|ŘKLuL3,u>Z"c(EVkQK+2ވ/#y<"d8oc'fRA K"qyS5Lx*ƝLzύCOS_y4&PBqm|1C;(~^a98,` C%{Ņp:W\oI{E %,s27EK0wC-'Z<W5.~ޠQA^\x>lso2~:f躎crUUO31ӒJkmXKs2T/ eB_mnhI!);A )qxp.$d35]U]]bcF;#,2 ī(}n _W(WW6t5c=Nb}h͙ofEǰ|9߿*%IWgϫ |`F.G ?uD4B]qp^ƴC.λI`[.[G0v 3iR`f1-3r/8nO:RV+JHFU1fޟG)y?)I~,Vy&UW4>Ff|?5v2)Vbw} >|v|f0!XL;u9Xn3NsٌQ (%) 27ٙ 
:Er=Er褋w%Nz{jqt$"'pCui<"Oe{~ j6?CH0wV͆r詜JKoiU; 2RxyLW)U[OR]v|Kd˶B% kk߻ߚ:/Q,n6D0zTsA%XjUZ*"$m$c˫fJ:0<.ٜe=m#u nt N@8G5%x]wM.㕙ѕxigj.ˎ #mw`b#K x cZw+epk慉)<_n~_{wcG+lpp!ϳrp8l"XU&Q5uu Wc] s)ƨy 1I ,9!۳pFVBg ]xu1YØ}}*Å?@aX`iSJnjG34DE4~)\;#*() 7pFW^:N"8xX~{iz\b $5Fx1駕ƂK%Q Ǥe ϋNŖk$,1j(Q$n,:ΊI(> 1O`aTuHbAFĈ9GMSd$)8+lPQ?%g0B(N葑z d+5($ϷyZ&94;9K܈!(iGzNi+ $r^hԹGxԧMćp"O>]֎O={wί៯>UBCiԜUVv%o?{͈#bήv'|μٯU.mANtT:{ 3֕M9qƱ: gЊ8]n89ly8ֽ|8T"7wCX!gBUs!Û0a{Wz>\imλMQRMBΛ|ըZF<=r%k;[ 8:mZܙoc~N6%J{; e߼J7s(|Ճyfr99ˇ]*&@#Bb&DP*Ib,/ 3Gޚq!\5@C~^]I> =Yv)RN\]T ODĕ.=xt9 %I+n8w>:0T :* [/AF۹)Eq0wſѰpx/W7S4(#_^ 'N˔_dP= xkhvփ @"ET`v\FpY he0SzKNU1rE&%-M]-RaPR(Vx+L%X33&XC,Q:e/RbK HSl5x J$:D 0}Ee [84B-(x<Э\.JZf{v{ 1 }ڄqAhQs[8}VC U%/"Ç?+)GmKҷym/}[^<z=+f[%IFAxmRraAN0-jp=A~iCW"n.8{qh͙ofEPjƍ=t9h4kIn/seg%V:d0pˢ1cL@gH Ƙmq7]RhÄcIqD) *+$CT$Rs;.$yP#89zu:rZ%i5S8@qG0V6z&ih1v*s$:Xl[y" S\&F[}>?5v24.3A:d!|lg%ws(cyOcѡ]~7ӥnn_6|F/Q𦿄@Xu("g idgacDDZ 5>bAiDpapI& :@,>>ŔN,pdlrkӥA >? ]8[ִ$rt,E $KPIH!@7sџ4)@*;]RAo)2OT4+ >?ߑ D_Кu_=gMwbꏾ_{UY΅@I&-o.kSL7^^'z ү_*>+?%wK%#(VNo_&*3c -ѕ}~z?#^N2Qt}~I(Oe ()l7!J2F*{*nURшkK⒄URVre$-b ekzpMu)z) ʎzx:YXWm@ apd9ʶ5';H& w9Y5FTgplHac;Xq8M 1ܖVACvf]qSk 0|wS\3%'yۗ3O?.+?fP۷_}Ae4d ?DI)c5<@$}hsL:R(C\90G(*CLsurYɝ3A)\TÓAndeItgN̐BQ:vBO6mD,T5`imxlA'j614f/%`)[dlxlBE"S>䤌%-d")W:"87Sxqe̊` r+GɠAѻ!g63pNS1q " *,ւ`0;&K,Pe/T4+!&4,1't+ʘ,/RrШ8c9,[r" impr ̩D 81Ľ 29JE.#yJ(n ~zP%0|Iȱ2rLOVFpK5$5F0X(~ˮtz_)2BDА5(rבT.(i(QrGÌxi(SЈj=w^KL_!UWb`8ڶMV5cԆQAa4ڠJ;\3 2D5AMPe eCvL"R<ژb$懤HaX9kF>i,L@hbȴ`eabu{Q` KW8Jʄȳ*bbYrÔBr4E::fS<1kDh8o%u:D$@S{Ea1ua [[;`kf U3Bтnޙ8;33# \饝=f"7(lӝ Ȁn+vqL-ch13UEQƌ!ۘ[75 RӐ6X͎ym#~\ɯ]Ai-#V&߆ɱIƷ2b(~|W;-AQgM2rtW>yr/4vf8}Qֺ6N׳Q\6i_ߥL.&vj7{PDӔ r.)\]I7 dG {GHmעhGMH6&ZъAПܕxs䀾Xb+nuRRik(8$a5^hIeiS>mjׇ/GFҀc~(WV,°Iw$R&'}MY._ [ a0l,` W} _>\I#:9Vؽ~v\ڡYXExM = a u]X=eyNd)C~j#%mU-fG0u8L~4Ta#cgɷu<ڝ ,:1Hlfm!e+#"-ѿƪ/9pY]v &^]ԛB8`#l.4=%eXHkPOJV5知d#ͬyƩ+W8~sˇ/`nQR4g'9%I@n힎{ZsHGv0CRFe3LIz+SXHQJ(R<1g7}QrYΰ9&ʅs)U8gv#ՈPZ!ERl~xb${DKw2Ax$s MƐ9^5"X2]cΈrd }>@%c9 dNqQwȾ=SQfRik/ 넞 =u\f#%ֺ "h ħR1$mnTi)?ĶnA[?#^NYhw!EeNL,vUR)iu?M 
Qb{]=ǜ_xLj!'LWi%,Y8?wgPgIFO¢k?{ȍ0/ ,^2dvqpp y9dgK3'Ednf_d 3V]*9F]<:"kdxIHUbZ*l{ڒބV'>ͭL0&C}4}1hP8 t[d @ O*ل^]e\6]b\V%WUʅ|c'iSH;ui嵻%?bG͵Yً}@P8bdvvJ58U酿V PR֝RoDL`ޣSM/mޛR 5Nku4E>fY˯8K,80@Q(Yi 1g4f.Qo!d\MOT;h5ЦyV/{#9?lJ6I4ѹ_3PN xOl#ZR9d]M /.%6Zu^)ĝNRcZHy豷er$ ҝN$bIrNTNP)2fogJԊod`NoY`)-`J#$/YR:K;(k!HtUܻ ˔j'2CK-,"/-SȽΉQS:/uVF&Ԅ+%&O*S4 7EEۀ5hLIJ *gPȍrQZCkSHKCa GM0 L6`qQG_fxMF L7r!]:\]G Aja>BX|%n$?ot^2ՠsӂWmƽiwԛ*6Ќd|H|s4~UF\k~u1 ˴=]_t1۔< op4[_~ؕe{5G #sW(LQ&ARi)G|\J*UqrX'm)Mik-Yb78?MԊzD*[5/gt[s2W_!9Iun5b5}ǤܵiJ!_y5oP5Πu>y)j퉤MAZT<=Aj@w{Jυ loOA=﵇s€_FiTִK(nt o]<߳L*r nj`9ı HXrɕFG gEnPAD`oz &r,c}sјNT A4 %:N z:Pu_C%7WЧ'6t~;DS%h /paƿKp u>Q?G% |* BBaO>M%[(qx1r\r ]G1抵 쫒<8Dh{xZv]Rh>x{(Wa@)ͼ o_F+w)kJ&D2okɐs-{`h%)_D2\z#. QfLQ,s}1DLgH%4q.#.J# 8)v{)%I͆2@}Io*z.TM`@q?U[!@TނUJR9lMjD}O\Sb3%Pg s!c{.^6?渠Cn)";ŵq[?s u}n̿fq0Gak}+^>ѶYOqݯ838Ek*>c9g*!dV;GsRBT'jj=Ӛ^FVF"+TJ xH/r>W0bmc 0嬸%SN!Ʊn#BEJdSD XH{bCz zꗈ{| pjz {bpm^=8ݙqmH[Sub cM6b4QI:ТBѓFhTU}KJzhZ $BMICuAxj\wR HM w7ס %u%}CIkw.դle{rx'&*l.򁵅$#uB"fs Yg"'!BCi[/LGG{2$:RzN1d=saAR}:y2iٶJUΚduH(]~uRη$uX=-I P Xb8wN &R LT uce\&}ԥJq_yr[|ѮK_^]e~>:?×ŧ%RZxJISLq4թC5TSJs=Njd J"[U \@vJXwvK)2rRsd:ZJJq` )YbsŸtWn˯ҿK(tq]0+og&d;޽ql)=KYΝκr˯J#QK[vu"Ԇ8NW80`GL#,eP:&IM:50ƴ܎ #rۢ#p8qC=Wש.' X }4!GG]seg+2z6%>~u~3>- wL^fn曯D/W(?Gc_|ѭM+غvo>97it k j;$#8u_n0?Zh9 qp&IKTϾN|Hi`pĕDPt(0׺( 0h>`O D3 ymA[RrEJLzA%T2D%3R&0pwpA/=\kbIގ0.È #:6-6YA%yni4mnA)QXarLJ˙Ug jC69 {$.f!r?][s8+*?̞2e/ʃ7։SIf&9.^@YRDI6 R(YRŖhh| 4>dd LR?IL+lF1EIr|X7)a CYI3יH05LhQR[,+&IW'6.L0A6hIM<鍟X&k0' ^g;F2, ;BѤxl$69,'{%c]8cUBW90mLv)KneJOk-haֶͅKJjj /=KVb8 ^GɅV-LNǞ.%".r#<0;EdXfv^Yo7r4pSf hޕ BBlaXǾ-ʖn ݹów`1"h`GaBK;J}mB+gS8[=Ͱ^x4?K5k8|E)ҜqMgث,~7VQJ =xU0KuB/IzPiR\Fu%Fŧ*oF*^KzZVUmݝ4|iKUXwjBPAk ǘf\לKrԆ`K֞(仅.c e8{DrbeNz_`JODGۃ"~VӌFONFC?Vyy}cC NCMO/c EIZOkw fn(L@.];͹? 
glGRPu"۝L" @)JdYBD?62Hg̏4;-aω+oSXTyQsO/F_9ģyR@ڢ3KF33&?/]Шwc~`t,PY+oO/~M%Ϳ߾yi²v>MLwoO_o/9ׯ7~ڤOͧI42!vE߮؊o_(=s;އɃot Jf#B5S{bdyW* !'0;ONLi9нMf˗_&L|†n/!-v xwK.>[U}_+d9Gz@Ï g0B$.L~s>hpx1ϾnG\o~A?~JTa]T1SMy*o0k*q2Wy2$!SW6u;Q?qzӺof3yL \xZQ::C$ 81I;UV`<;)g!sw~3Ja)4b 9MpQ;I3CAv$'F&QyUu<,Y{e@7eR0U.W'FxkZDp [iQ`|Kn3Nja5R̊o5NQzatR?G^]~ֈ50 4.f[ r&[5SgȢPPrŤc9}*65ur0SӀYXl8JB6h ߻C*6Ú\8a`I(LdW?P{q]tRtʘtx1J/bq5 cJqӛ I}^;w ۋ7L~}ۋˎ0f,-Ӑ{h,w:Job+Zus3Vucjk 7P ɪ9rXeH-Pʤ|PvL<#7Ja4%nh("L0Ԉ$4$/9!*˼"T#b؇ZRL{ks773/ԔLDYڿ@ah{ 0$֗^(ݻ66߈8D J.i ..3<ˆ&dl t?/_\ެlj*"*%`Y&8^}0 mz溑}@XV3/Ќx[yoݜEƂ~{c`o[gy`)]dE%K5WH=}rBs5$mA3tzntI\jīcr Dj # 9C.Aq\t2A_f)k!]_sI:Yҡ,ݸWdzY!x_@7hXHR"LELKG%2p708.{"*00U=)!90Q?XH픭cF'q!a2#|uk?I4 JWUk-i#`ZmSvE #2j9=L;D v |xԘՑpBQ7*JF0ΑKpnGjtV5t3HC$ƸRʜ*(sUT#MÂ]8\Wg _}` Չ rX`], PDԮ2,˕PM-Mzan=&. |*y=I X…Ӈf$!.%zDHkj|"QA`A7$iw1gg/^ i ,Itwx2C[S}seCNB8KK77%c%(&-t8݇.*lFzK3jWpE(*BS]=X(*;zj11܇l?$ Gkg.ɒ;˫MН$.e jBܩ%O]D_ܲgH+C7 _JT!q3f7nKZ^ȍYbI#|]*!D: K0)Jau6Ʈ;ʜSͪAf9[t 4W|YaB`-Υfq gZ/uJB]L1(vCA=(PH18 e&ܴX|X+k8P&r)e3kaB!r|\3 SR߭oayJXw~u(1Xum4y5(N9Q5c鉷Zܥj5Bd?@7k@5pwE_nM,1_խc$s5y, H9Ū.ڣy.t+hXW Ӫy x+y('ĎW S x-{&7 ATq-hve}GUV&fW G؄:V9bCQF6£jK-o[U:x鮎4%v[I?P7* w fe]bk8gi祍ZbZ#A#QPLYd +Sҡ8zfTb sٹ*:o4לvpO}_"X΃QΧc,.BW(WzUc,OohZZf>M'TBRvK9Mt'#Po 49QA& M$p7Uo hg2E0HRS"8IBTTX׆[mpZOy_~~ e~h @Wti2` TyܼPi9`)T| ַ|weB2vR{@ VHTLS" e "p靁1MTk{ai"Z#l#in'Q1\')ElP=pFUdT f3\5&ͰlA CEe6s H7)0`Z +ZT!%7_|>)_H\ۻc=^_DDH*Ӄ?#3.@h>EwU;#3~3/;s{5 %O=3B8D"bf6x wR Z!<)9z2^T z_/{Qe)3#0yM/eOcxi?f4FC 1%W\H|5|Lgxvw8JdN̐:=k: pZr{T0"} 6 X"QPqҁ+,2Q.A Qb,j<SD рCRSa%=Si~y+舤x!K" dބ"r.2E"c:*@@'2-$ed(~JnAR q-+%CB/q} h4)gkT VB C< [hlr! 
3xk_BL6zEm7/dڒA^`q`%$ͅ_?8~O <.Kitʓo}[I{ /a:pi:}$y7^AĦ.?3~pq5xq-(7]|=M\!vdg 5^c͋ R WXiTFM <׾y?#kZBJ`ƳxNbJRe6~OR* Q ~T<ę3uW1տ`KAgi)+ uQq6*-urE2K @Eͤ4aܯ_|AhI}9|;C g>Mi&jy w&miV7#8a.rǥ%' }Z8fWZ ϋ7˼PLp}~5#@Q7>;r/y&E:ЇҝYHهr}>-L>%?ࣽ17ZO;4yD|$vlʑ/s$|+&k\[хOOU~i кݹ\f!J{Q*ZUʣ y إhnaC mFdLj]'*JkeW8uRKS297gt; 9JJ+n}pPFB2JN݅Yy_tKIhsJnPTX@E4)bEw?u#<(.1__zڬU[dscW髳ߍ#UlU)lwVP?[zܩR{H}m|iϛZ&ܫ?{GqBP'β 1gg7N!_Z~O=ޒukXbGSQ-OEN].7dZGLH/jXb%\F^?5ݽ'k"'0yGem=k  o|ejupSJ{{W&{ oEȓ>A9[;ijtmj&0s|2Ia+D,8><@4g nƢzexfj 'zJ&2weޗgjPr0{_nfOdZü2wu{jM r[}u^F\1nԵ[@'4 7HzωUBQV;CC@\ RaFk ]k񝩖1`k屽۬_3V :5Viđ1om{gDfodsi>(tHȇY>AwJR}'pR`;N׬ǻY; p=0nP{MT|J5|bFmT t8~Y ZO-u`Ənw$m{d{4e 'Fۇ +4`Q6A1f ]0)~L&O𴲛ݷvk=\օEtw6[L=$4Cd!0vb~5 1M6T7߾`dJ6NX>BiU&{ '{y!dN([Ž'["L;l#6a-P!J@#4= rj9 m'ȭA}w4btkmfQ,4;KZZa B#[v`zJ$`'W;eA1+L8JZAC0' kf%VG6['yO=v-L,q]Um`7tyy y ;6sFPQ((P͚ 5/%`-GX մIk|5 &E &OIvi)43ЗIrc/cN^eN^ %֜whi>w~eECڔUC(<Db!v|A W`%3&{iK viKIꭱ5M`0ӼOmSbc"aJ o!! "ZJ*a+!L HֽV*uoTZ//QptzA Cjey,_Nˆzrm #A@ޗZV=z uڮrg`\oY`/),Lr(M1wUr^P]fH+zWUxXB\y1cbxk~U8^%zWv1yNLd={jFFGWpL#oﯤK=wFgHtBB…ؒ>{L{|R|Gٱ]3LĶi}{C@Oޫ8 qЩ !>2LLSFBz!Y BIR=G@Ehl{Bdm٣O}զcgjrIܝ*Tw{-> grɫXkBS2zoF˙x .{AiRGJDi EDFs (F .SQ"em_oi%EuUņfP_iFpN9^[9mMA\M9X h0A^bK ZJQv-K ?޳\q]T@G/2y3w͝7g*x ?5)8%ءRk/&psf~rH_}Ln?9Еd]mVbȊ_ dEþ\^e;^TzxƋ}_ .#cB%Ea !kKC[%5>I@*_/gz$JHTid9e_R^L:% IݥU2)Ւy2~ǨA +6jϬ@GZ:׋[#Z_q7>B<rT"a%_;9rem]!Z[)ўZk"ke'͞~.<yy87mFFW m 8TLy4)cdR g'-"TDجfXoMx1oo j0_,s%3pO1e&Q\^X0)daP\ TuoRU1Nm~gzY]~d"9\03J0t O+GvLd 2't癑{TkHUZzyR q;Qݛc&\udlVWo붮~#&v{Q oM kH譩 `TqÅ՜ i9!GC&(J8?KPǰv ZUSӖ,krl62][sDZ+(/8YqN\RYPs5q^ TTYXX,t)H,vzޚ ^1|jwbL|v1(h4m|P[gy e. FYH[f HZ̤ˆݻPմ85r=&ӒvRw ")L]o{ϣ y?O^ iEJ_*B|Z6};1ӋT 1&t:DSK=lWR/E,|\O[G:!LZh]alԔp̃|WΞkaΠT!D~~/ѲR9;UsğozAz.7>H'px_jȣ("MբQ}bT/Q@G' GoAFI0Np(2zۀ!xfXejI0$vbKbOR{I@`x7t O#UA! q@P&ҕ9˵vR0R4ؠc[0W0>.ӻ_L2 d}L@< ~hп4LIH~WRoP4iҠӰ`n6Rw.0ݹ;dt](eBσbHoXzs5vq~b0Jc-6ὶn˷C t~? 1{P4F;Ør)(;Fw| SH@8>~~X=b07vF! 
rہ1x/ʒQHMs1vVn;f4 a|Tz Fs8CH 𬃕ƌ!o؈{gͽP+)\ Wy=6| NsoUp̈́b,ZsrsXh#n„@{f&s1rByՆ{ !y/07ñ+u8[ bzjmjt`*xdǿ9.ryoa妏~]y% DjO}&BncX).?=ޔ/"; ^t{u'hAPB=|wy',$h$%%s}K|y)VKҙ cS砵P6X7}$@%ZaغfzFEIT&.6&k*V> I`EP4GBi0UX fZF16GILV 6Վ]; RWջ{ݾ@6 #I^: S[jm3ߧ0v;S:eIUWG -jD?aMk8٨uADZ`+t@ZA~\:\K)vT6:|@'&Awڗ2L}t=g4SI.08/%lfuE܋ӋlB@ӎvS)C8E:Ta D&0O;`;xҢ١;Jy>9̋0pev;_Ȓ^!8B7G,,Ft#UIQ nZD@&ĉqW|_mc#W%ON]uIQ#V1Sgp#P>P&(  *t. Eaq BҀCC Ѫ(pG/Bv|ہZ%ʷ<?C0юE23( p;_bN?Nu9 Sm!0ET&WՀTneO S䝓P0^wFu^WI)A-Ns<ۿZ#iaOik2E~DBIi{YUtĸЃaFOeѽcRZ !^xb \QF\uڲΗuWeR\9;##f .¢D2a1.쳨&8H>:hw3,3hUzwG~3kQ% βH i,fmXHPH0ףa $cFxcd؁E2 v\G5NrH« 8:4zqT0G1~e~*BV2?ϱOykx@)n4t[5D!܌KNBo4ޖbR?\_76}ǧ~kw^ &2L*dXU!UwnSMjL>LqH`/].Aw~v/AitvauNvl1UdLI`b,c0#W4/?7xأ"\C;+h=QP,:%n=6ïmՠZs?-&t|4RW[ {в6#dlzZ3q=4utHp>AFP0p k%/\",wd=Isal): qA—WZΥSȁ:S?F,r&0).9|֜kF){ #ij2 ֛QZZ@Oe)DY gA:%<»uL1ce|?Pd\=p p 8;p(R i-h%/Cy,^i*>.^Ôd߻/Tw67р`9'B%Ƴ 0 dPMor\7[cN$iXׄDL;v;OD9|?ߥC-$ ǤM!zɔ]*,jFBp-|cH Ju M`yeQ.i 9; [?jF /W'>WVg2!QqB֯7#A[:fe`}:{`r ʩ `l1-̣6 ;U`;V@e`/`BR G%l4@x *:jdV TGb׀^jfݼL(_N\צ겻ʓm{r^Pm76&D8{N?wfnvZS95hB c2˹{;y|9-(9;bxkosz\'m _{:9w_no&j4тљ9f|"ji>o'?Plj{_k'Bѹgnde=OsɕJTb.`:t·ezVM&I}FGSK(Қ9ǰb W_QkAp`-..r)\f|ݝCbp ۈ5 >D暴ᥥ(V{l Lor"5V'bpm$y+65eaq Ygrz)rZ ڊ>FEg&)p V٧ &u^v¤z8aNOR[ tώx GZpipqCf91Vi{"l6? 7fUMGQcGwvt:B|S.ht|j<тK!ObtI97wЬFsm]!=^УEp3X' !D6ZP(pn2#QL^17 ba3i$ Hsrr WM& VϷ^9?9sR[4! 
Rzb,)NB҂jCVkE`VyANu$#1X qLQT) &Iٌ2(q1Nz& f28dPLp d!sNV㈱5 [+BϵRP)%佶As*3D&-IFF:n}(Li]n@ wږƦxEh.T}/~^).?=ޔߩ;y="T1Zc̘`BSڨ\s.ֽSmBi%2h]oow5b\嫻Z-qINo{hv(K] M.5u/X"vGEi)rv2`E,]֣ r> <\TnCwRk"dU0R()ĭ :ZaL ؚäo%" Ԛa*4C㒽u/SF)ڠJqJ7!ܚJd*?/G#3GhЄF >3+L[tÅ2\I("^  yx<`RwPדI/Ѥiټ38ݙxdC%dEvcByy-|6Lefd 5C D~J5ו~௭cYсPj[oH_ Ņwj&i!7gږ#TF %ܑPD?̴5;Ŏ$[VU'`JdQF@"$Ij6$0_}d-nˮ-u.'(Y dBC#9Wvoj1黏n-sw][b \S ']R.?ͨKƥs `5U@$>CXE\!JZ䶐yň䟮=7_?&pMD mVװi_To2[Czv^y2Nλuq#\܈El_(sBu!ۻwMrL.iwMryRAkaΦbvo=)U䇏?e5G:~(nl/NuWMʻOI;|S]~1[,tgḴ.ͪ{[6s_ں?GAg?~8ڵ-a`n"ŽnY*Te>veB>)5myew;J*MCԕJ65+\(*0gPcd\+0uf@/4ὌMS,o>=R:ea3\h *j\G[y)J(Ɋ5根% s92yY ̸hU4{LS$W5bp1%-XNfkA[s6l^HOMm6,.$ ̩$ā 0lD/$2C@w}7O_(B"L8HTuާ#xH8ƀ0 }$BS- ^0%R$nn(a/K?`jܢq8d~n{Ay$Vd]gaunc}+|~~g繱O9ϛZ'~haox.Թ2 '>4jLf_)^ƈ!JL2sn{7-ent*zwNo+Vf+þR=K3Iy:K|??5{;IG>31{#Ɏ?<1?=XMwyWbWT̵b:͹ɴqpQݺ^֣MDp5aF܈G/KɹX鰪 rV]nLco!G#o,UU qA`Nro|O۪I`wYI\SF{&t:gfN@EWeseqޫ. mLs59RPU|d|~VwA΀y2/iNIH2iݷiIA'G-P5PaǮyqϣZD_ZD"!IQb ]п|zp_7#UAGqft\һpzJXO@=i$[+0ɫP]V )bR`2|+&\Ĥt7P3bX9 ҐTd @Qx;~(aFuǟ gRm2ʩ yAE&+ uɩ1d *^VPH ȵ%F;ՒPH 7]$I7{zrVtkOޱZsmj6SHnߛEkO֑Ux7?Ecӫmjm"˻jb< }wax#lScM~}v6p-yyN]hryV|]sRĝt\?],HXq믿DVj*txn. R 4}3g)7b\XBⲀӚB)ֵKf SZT ^0Ä1;iC@CrZ$_((d^X1aD [ 5 KQC"UT)dJT[f?2k+ei]c|Q %%3Ml.ga 1jͽYq׍Tt"E<牢Q@&;arCp4s[.w7i~I9狛ׯ.?iq9v/Whr׻&KN=k'=6qŕd~zzӫ&ZW/?HfDD>6|Aofƍ信+kp/\^yWkűƿ{%{/?oLErngSHlV3pRY3IeU+{kZ\HK16 Gvr'g[㻤!3 Gbfyf7u>b`j|:8^H;##Ҹ'{DF-h yả14FL oڻTNm?x#F&gJ&lD&C=;AXΟ yzp7#AG1'iGvܑ{lxP#ܰN쁴@AwA9\y dž_#bb<11+GSnݎIb]/;yIs`__!.dFNb{C:-^ PiN =eG9#$A"vD{D2nWbWB/ "+sUz Z UIKc9QB[[cY.IN)%-SbT)FvWKN=Yn3K2 A8A扸­޶I#SfQE瑤XCپ/U+vE8DNꐵNS?$!dЀ(2wȢѪ)NwCVVuf$JSCcR+C[5Vw:[6Rwdj\|6o1#SBkoDV{DQM/X.߲nnsAҔۙ0ƻS,L<ӖL(Qe9UD@v #U󚽷c'1l\q3UZ/|c : f}l/׫ o!û>*GtH"ю5^ CO =;QeJ.cOz"{: \ԅظ7>ִF\/`%@,/o4Sӳg0RK~YQϓ?L.seU OMů4PIU_b(y#%}H}S? 
GYH ʊYSH|5PKֈ\ԢEeлTjzD]j1'KsS2edQ&^Zk򲢪( / #"SY0[TFG䘪vU.cӒH9V9&OM{7,WNip}ys>O_1Ufgx)[$Bٻ7rcWyJHF*D%FXdH ώ)yfM 5ꯡ7w7@Hb}i\o=aY>-gO`igl#4BM[O [tJhtdQ N+HibN\; ӄ'6"iSL5 & {8^3QpAx/Yaa.P*SD"DVXrh 8P^7jab&:B=HT̃I[Dź4\W9c~xA?4rh ]Xb4\uEE|]F[6hdmq}]\pNPv7s26O, QxhFkݭ5GZT̄"It'ڸiU8oOS*P %*ZrܖJ gTKlgWGyq<(EDiq_n]ŨzF EHwLʆFP|ܠ}ͧq 3U}rй\(.(.B^DսF)ώ neWAb>+މ!QfNAQ(V8^,צu)w8\R,TEM:/G7~Q`AŻe JE ^ *tfY}|ha G(dV+<t |}GhwozXޯ uE(݈ڱMއjp^ 0~Xx5l(oaY_`/, 2X'|Bç/3%+NApm,ηwJ#](ސY5ܖoj5<m+vQ X>\j vtLTziCO %rG'Oc0V&1A,WIB׳kXnX_7?$WJ,@Y6gm! 6>mLɔD*6FI-#!#TJ}A&hjPØ\|yL'#E1Mv)(Պ(qj>V#)zo~N[eXWKm^ZEJ""4GNIpj0fLjHJb GXOib&|n ]^X˺|uzs>FYrژo#搖rV3!h!kJ^-J: QB {U?CFWEH1{=1Pss O(FvQ>wv^Ev^6eӂ c <1S#NXf@ c.em9(t %K֔qv}1$ƳA Bؕ`2#6ȁE %N$$ArJ+IwVAF#!#AaSaaMO 6I:62tLT@a ˆxw<>xD᯿!&=7a<m5(4}fʃm;l ??f8]wZV:U[atȔCMS;4I){z i^,~KwMY(,j^r~Wjs(QJQpVRfV@Z JSkԐaRF5mHX7MxZAj.H3ep*L0(u)zOe"BDWNa$*'  :(q)5HRl%X%agf 2(+-g*p20kSĜ!h ΁h xuj@ 9"0'5`4f{R\p[k<ғt$ן϶@6i`$ꃍyNj~Z_=+u c;T)eٚVr6bك/^hΘ _[w^,5Lqi%Y'( ,V*ia8#[@ Л&{Zܕ?7,!(gm٧lj ܟ>.ՙ:FDM6֭ϥ b -2eѐ(FV7!𷿏09P\ᥤDb(??M&C^eߙkwO>x@? 
S"nER,5,yࡄ0$C\pT'Yx$,+vt+Dv#1\M&8%/iKI ܁9uZ=^" )}3>"/N6LF Wv .=]É<d# >Cͦq^ tSᆌ {9L&^W{u{Bx8}u{kO1ImOh"yRdmD…xo%| AsD8Ezs֙;b_/ pz|f+/T?דkmI~kcM[C3{;x,Wii9{LK;`4" .YrC<6zN(xInR ˁuǨz Pfp3H6⊳?+50jHN "Z6g3*B-QuIc{i ĶկErE.Is ޿~#4 .geG}0D7g&3G\:# ErEx~Tq E%KPySo{{23%!ѲrM0Z[o?Ydưniը6n]8uz)]]U*^[_nٖL^.0b?E33 c6=8B7.()3FbnSͼQpxaxBS/K|R3,J-!T(\J)MWR.3X1n$!0 (AD:d x!Ihba!v>:J#<߼UD5#!|%[O v+ oh>JFfo ^<|\1 ͧi$ S@?X?=>=Xˋd\I]Bs t-=!m1V-3*jiy.c2ZD_˧ OkA+lm=d" ?L!@nA#-,nOAd#*n^KO Ɋn;V wS>itKF.حu+c]X>~>ٓ3Μ.+'0/\q4B!ZeL ;W[7Ei[U bNwn@=V֭ǔ"7Xpe/̍ڍP*:g_|~ >W~Xfl:E vA:4Za Ӱ|`Q.4Q%vr(yQN 5ݺSB{DQz(W1Ъ=mum4JQ?A Y1*AlE_\ )~7aI_k^T"qQgD](1#4נ/R'Tq:wHxk5bgl-,xa`uPoau`kIPCX6В5tF_^]-YJ{ev8#ƹ``$5c;qkBww֜]%!+5v=df$>@A`sٽ NЌe}M%7'JvBdxDYY|d?Vަi޵u$Be&;L_dzX` ,Z)!%;bVDQ4yeiGcaշ@eRxgi1w[^;\k>ŗNZpzVS O%ב'sD90wL~ -/]hR1˞ncܺ7Nu^^}3ʾU <8=Q5mͻ5ry%Z\a1KU( *zPlMv"P8U!PE"(*VZaWI259,DHJ%5BwJGz 9QZ>79'<&b9švoWQkt ŦMIӦ \m߮|ioEK-Q$R.܄bY)lBfe }!u u ",` Q+t˥BR̲9}uCFr&ItEElGYs!(1\QӼIk c&J 25%F$Д REI`)v\5$h >1Q{#:%'1kI$e䜨+1 = b7^ZCn6w:wo1ڧ', @fɋavJ;i|CWדTLNCFy’c腕h9 pɷjp DrЌbCJY⶟ha9=#gՐ9 F!1LQ(kWFl'eMFQŴfOovHdDiZ;dXgiU;;>N>uh魎D,jLك s3埜^W;qWq=v{L?W (E8S{t% x/λ6BQ\ޠW@ԓZAI@YKI4!5K2G^i'E-#lJ ^p g <"Jf 2֐EYB2B gR:YǪ iGa;-#AVL|x2C%&:a*'] ^~O VgPbhɤOT$f-#Bu};14֓򙄗I9ķxTmģ֛=ZṾXNj>ȝ0R?jKm=k2)EISLIb#`-Pc ^#4I᠑x.x!3/S| =8'XAhXP// 6"cMƺ}L NE~<@PQVQMck#B}nȕZ.gغ/(a2p2Ld4_ Ô+F.3¥H+Yl*!b(FSI!!Cu#;9h\$^,ca<]+YN d4Gzy˕^%jSK#OdoF=b{#[B%(mytYnvg.]0)/(>L{|9yi/-(2N|rbg˸|91#;sx{r unڰyƋrQ &<$d\但\DDr*xK%Ǻ|[}{*w}[+g?pgR׉%W"U6Ϳ?㮝Z45?i?)_&w q''ҿ_o]Qִ~nkŻ+{2T^2"-'zK"_,J|=HLZ6@ MQ\BX#;YLh9ru߃MYZJnQ¸pJ|SVYQ ̷h'2_ ơf쎘WGW,_~WkH،< e,>qq39; m:Gh()03L3ޱ^tmPv/Y[aǓ3&񤸺`/Fݾ($ v."~C_#pw$G&"U?fYzb_1DP#3|G"jR[4N| RCQ7jRB<-$}˧pE.Jnus>T Fg1R\H{6{O=ޓe{3BZ[Ã5 BS4Jo ʤzzí."+=rlzn2yt}wb dWzq,FnWwNK|:|E?Lyj V=KE6ߣk7pD[A^)9V PHJ)Ld1î-#lX؋xt$^j,hb=Œ2B׊}22t*+%~D[Fxz&ުo"YfiBdZۉo 7}ב6{ zPBҊ9_3ҋ YzXB Ȋ 1!DB5[FD|ƬC-,)|\-[Wx-SXZiѦꗍFl:F-b f ԪHudM"\ *HÕ=P[+e:h,YJdc܏f!&R咒 ech\ &UpԦ}Rܒz3/"J$9 "pWҕփ"eDl}* pb+:(X" 2H (j7]QS܁kt!kS1(keH by".kjDy5F3WhHH( =z!>E"EQIX6&GH;6F-\$1U$} 
B3DN6 *$-G,K$'\ݯPFB5X-I@v<ьJ@8I"Q2ZEݸkܐC`M-cgEr(@^lr@>#F g(vΙ-q@ 䀐g X8: ܵʅ,4RyE hzyQ d@tZx|LYAr9XSVd$gmZ!PׂDAbzV5Eryt{ jO.]`(9cZ{8_e;L=|J2̇M 3V"SH,)&)ـ {ϭ:V-j2x6 BzgSqd*dN"eeGNёF%v䭤.ҡFC ^ú('FN.Ю!FE0bԤ* !ā\&LiX: +'dp L#Hm 9SkhulȐ"ZV'lC~H6ȂɠYvZ ܎9Dj})E T(4%!ʈ&"DA* 5iCF;J(H.>X=@Io $cBe.J5~RvG&HRP9q6>G%T==Zk͎BV`Taq^h[7>#ZEM,4@jrw&4.\F\eqBpIﶋk萩:h ]b IP2!wPvND!fЅJC}I}(Td!,fW[K #+aTd3Ў_҆`JsH"6F숯KR5`taBzg\HsƆO( ż'#4Hmw]_fSH e 5(<P2#VB)f/p!_B@ m6y Fgh@p l/:ڢs%GI-Pvˀ6ڱRvclPPC)/2l1 2#bɊA p ]UZ IdYEQKmZ?JuK txf8>,  F*D`dt$L-+H7+kILZրZ[{͟V,Km\|ܔ()_&kRQNc2XitU5vرꐩȔH_-ÆWܓ4ܞ4 YOy+ѫҵ& n89BϏu'ǵ?IV_Fϙifl,y3eznZ|?S( )OG|#Q?4Z! #|;hqۿ*wWGZ=p^OigȚljA:Dxtx݆g|$7{ 8<&ٓ xR=9'Wjz_=p^[w@ty&93?ԯ:s͢J8uD^ߟtu,Q-}:>xl9XoW\[ѺoV6U돿mג1F}n1sUF(੗ *Q.OUOS4Ruyu~h -Ey0b1toiirbv^f-ʾΗ n$×4rݜ~ַ~{Xz+9dG~0s|k~ %4K3JpO%qJ!fcE`ݵ=];u }'W8[\;ܹ{i׮8?PqGFWQvZZc4vJP$B6)JնX|1䅑h  )!Tf5=D5x=rOnmsl<܁{G$QZG Q:o*DӲqI>wؐGTOZTG}t?a;m$4V[Ctf!Z5qʎ)>n^<6sT(h %yi*3:JI`; n16s !Nj:cl >wؤ驼i[8\~o"zl !{AР_cIQv1Ye9!>wBsm u|+҆"17 U ʇ6.5K%we^ KFk2cΈmq*6"j=&!wINޑJF"JE5娃VWU32 \cl .i|];ڻLT Ur"&jd!l!}hLQ9)v1#s9rNKFB#Ͳ(W7kHqԌ?;r@a G]f/KuV%QxBe71ӂбSrʪ*M{`DZ_z}L\2ؼSɝ-U8?&RlVj?|ชwP~ ^kY̰ 77.3HVˁ_{Z= @ܾg҇Y|g倵}MM}=_|\LNk]~k1v'+m bz{rf e x&gp?wzǘvkF:o{J~`R}W,i6k#e&ͫΣ>aXYw=9~N޽r}yuţ{я'C^YVHmi)H_>iq~qmf')6pN lİ":K*h}lʷ p]cU(2 i mG%W hYYwmPvVKr\,8/R@#G3os-tvqZR\r?'fj8fZp2k)vѳ6;us#W MfǿL Y;K:v]Ko66:Hvl t_ܲ9OO u(0:oB0@3Fm~qh0Np}:;n9;?~,ڜU7Hv'*{Գ2z4~lWR<5\̻2}7@^%n>dWgbkқDˎ-D阕_7lZ$1KQ$\1#v)EYmxΆ^#Mb6nL{htvIS/f?їh7|T>]?wdܗLOyK46h|Z:xlQOM:sŗ(+?y)?ojۏ9vb_/OFXhA}>ӛK^!x͟4}Q ^^T%Q͕W)FUj4ځ*{d,.DtϦ ׼~u<-1-_{_u0AK $_}e4KvyD9Kur)ߕz^foMYU/#DWWs_/a3ίW(7*!B>sFhR,xdbZ>QEOE QoƸNn"?<;tx 1UBE8;j:鏣yhCe9ڋo㬜[ []}/(i\s^ k}?~|jwBϿQ|CJ<[ܷgF( խy|6_ ]Y$(t>xvgIHvxBҶwܺhOR#$]LWS9W"y05tGuSraנpl'wKDCu䲕RSEHƶd0ڌ|,~=|eY"~mW8߷Y7 Cf=Etq/ׄCd~SQl/ ZbjT:KJݣi,VJJT& RPBNfVwnRyݱ|ˁ;l(ma 9"r.#'][8&t=0$푕OΫqHGZ:הT_>wCyFMI6-w ~S*ZP&UOʴR9ڐ{ۭ4Ԥ؈T=֍=2ْϚ;MP^w\U*q<| F=!+'t; S$K@ef##k툓> #詊Ʈ zV)lEH_cJjUrc SˆTEmJ }և`] /wklakM{f4"ԙ(fAR.&kGLUHJQ8`dB:r*RKHt&1&rA3;]E!a(\f)䎔adr*($5&d*]Yg<O\vZ5 
mb0^Yz.fYA@g57I_ج`=`o<[UbR%yUI]0V ٙ LΖBpVZEZMj/#g#2™4fַBy T1gW<:z*pLD~Kj eRKiJڜB}تΨ `5~o֮{ 4eSmNMR4՚q~͜#ShX3C[iMEI\u9›tʶ;>i›#lOz#?{]NϮK4uM׵eav҂C4U dmܒNOAtЎ^T8%t;4MM81#)@aXeJtPe_Lc݂~f4 17ΰ LCʙy7w;Ma!Rv3@!Sߪ֥ lS-nuLoCOP1 2lNstS%J*T4JŦdFN^ UN8%njj\Y4Y!>Jn]kt#'p贝"贅c%3.fl|vo؄~u0rJ߾I]'.77`5GBE}3Wz"UIg/"T&*W9PtNeh=8)g"DHO(<ӪaNF,:/{#ӭ.ro4n-RsbG.eP Sr$cX.,\nˢddbTI2Iɥ+ .F0n@($&[gU=d] Y#W |O>2@p\@,8F1zcU)cV uz!uZɜzx^(9z!S_/Q\3 ;ݚP(%^Z >13aѥBXPF=3)'UBѲ4ラr`^VP>#$ ٻpj*P~'cl b ;i!yވtLb 38)wѱW: @MPrga1QB^cZ^Y(\񡻳r>6rO\\WzAi .VH2:CF-$\^EscǏ1\ڍ_5f>6PUJh(ס C@e?Ņj *_MǛ θkNuj&\8%aXAKi!kv%%O202c$}%bԼ 4lnϣ`:T1ٟ!XWgUVS=U2r1]CZUk=Ϟ|G;{ D}t6b'adƒP 36bISEnBH@#mpRȒJSNqen0?P \#y9Vdת{}LLsƈv>B2WOP3'h'JdwXxČh Z +*i} GGF c2 "3M AX<#'$\+39#e=Z 7EЄjF)j\LLZh9i=`1sޒde.OEL3DcycEn1QaJ82vFgHAK3$R f/)ΐ@ ̞ u¹#sEYDeh)V*Ab58VRFuf@%8%b-r,/rc+> 4xc%NfթpswUg/[0l΍'W̻OuwnKyeُ^]?>ċ3T ܪ].>~LG?$/^\çpe/bvʧpJy^K cdCZDO]T/\U߲O{M4)8 xcxEƙ2ZSϾka752;iR%k?!v\XC+e7YOdk> A(%'_˭A+3ѐw&4䫧'GqѐW! ``pjIW:b:8`WK8ؑ{q !뛪1;uԜ MN-9eße ڽ~^ބGT#ý?seê79Kok[Ud/]/|q2>5c*; 몟sߎx|IK 7u?Wz˅ذ=ù/qтru;·h1]#z6(:}Qݎ{eLtޭnwb|Cט2qx7%ZGn}mP2t~ƻsɼ[JB6·h1d NVvR:YdVgTQ]snbp|[#!@4rI %Nt0=\}[霚'R6AI!ń Ipo)?:!RKAU+Jj)'븍<$$Hb M@0nW:ZVIo-QG4*cC r㢈SbS (LlFnw .te9Y,o'z5rra oEξlxԖEoL npS#Me.̖ E<[gf}KVrzzo`oYb%Qɑ'i;g}b ]Q%eNոA}>v5DO_|6'hrN$m ӥv)שne:խҔ֕ 2Y؈43ҿڗy u A%9%tX>?19rϟ'Εy-<+H;=.JhJcZ6x¤ahBH~xKu&UD~}|=pqu?͟O[G)\lPgMC+LJX*47;Y/IZ /7~YU JF?o6 {2Ooӱ}xٳ ??GF՗ۛeΑT؅stH zA $& { iuq<,yJI%v`<,+֛> BYON]CSD0ocʥ؃3K!eKȭ;GݕҦz:u?ż̄߿|_%ݹkUƈ$xhm?yM?/w߇ ;˼_<; AMigĀ.Kln5y5\_w.[ h)5隆d%L5A9E1ˆDNaufTɇڻ"ھ}1&"@|P>O-I~P8^$qG*1Q{I6Z/= JV0>tx; ڀ#7HPNY w%08^`c!DB-5+eն[R&`s75vpc b*0j1!I@3mplFՎBF"3˝ fZ؃%O;jagW U +]T(]!Il9^fsRTtd6rd& 99I7uVk)7:\AE c;*`۷ B0>QB٫b޵Fv"%KU50^q$=@ ߗnIo#tˌ9⭾P!,3Tkj3^qĬ5&Ii8u6 Rq6m FM$ٷj&d{wpV4lݗo9mn.UeVQjS* e$UHAE͠/F'JW;o5?ފߞ kɅݓ=ʗ=q1yX}h.wF) 5.e"ս!c-֮, ##آ W ʒ=ˢ +#m8 wI.h_U~4dQ{ ۛg%' qFt#/%yDIJEbEQ;P?S`,9jJ)$]Q6R/ ҆A܆ j.09麑 oh@)е(jj!f)ƿсtjnVZZi!eezeajAKv(|3tByl}3T%<6VQkgPKG%,%e 4-Uxܫ"R萅;j&9888( I ϯ@"ab!3~oo|TZk+QÎPVt73]⫄:Ǖv-XzhF,H[gG;ON}ZO^ <_ 
cO6}1yxjذ3x!,`fv/8^q[L~|+A I(/:W'aBP4Mz$5Rr=2"nvӗI.@aGAۉ~%9U VO}YV`B8 Wree=`quNSX \7tYbYn1A %ً"]JR igA 2uj6Eqo[1o>|5p8EAjL O8nj%% xY@|!n;}nS)/u,.B=I(0Գ}(V_C\< k0ӲȚ@KA>g#jvh":#9^1 AKŮrTx*wO#@+ɲGG\߽v i17K]y%TBc%7 36R4ɺbV`KEh=hyQXwYU9Y1VB Ђmeh|竕Q(bYe`dJviJ ! z[uPluVXS&(^m+,|d,*O;DWۢ%CP&c<% ճm@冞8+ZFHe{7ʘX;i]tgiY4wL\ [(tB_((A8}p)G)JBHj隸xǫwڱkYfPa/s_hƺ]B0RQQ*\RK": =GhG<~%Q AE)mQE#ޣ%ŃRXqu(-c[*7J7 스;,ηR{xXj-~Fx%(7'pJ 1NbQ7`l>;rU/ J.ul猾`!wHsjk?MghI+C8xR>[,ч$l;[ J(SՊU[ RzEw{bkO\VHُwVXhQmVuM۞ȟm:ski j-~^]Wן. >(>,m탿ݿ A'X eކG~ItMTŵr%s놢PaWwM֗~[O+Mq*?7 z Y}J|b냘6j&`;N#[=wѬ>*"4atb%SFLDuw$EvJEl}Cy]tOEO\Z 4IooqBw)Bg(15c͛ˋo‘BWAJQFB5iCO\HWm0'Z_*.R/;KD1vgl+$(e$[ko[QrX}#'cr`1?udgQj Kؘ"Q #g:шn>=vzI$<=N4yP`esAC?AOhaK]B4чF ݺQkqS"k,$haA׾+&P3𡐍fTV;CT EY׍%T Yaj?TޫNd5&%gCc5(1B>wfYn&%KZjiX?uNSX.K@, ԍy՚q ^6J$R}*Zb}l{Ы+^n#30f7Aq.U\{U#,c7,4F~*l((;XބCHTK`rw~j4O0Nқp)@bHPO>J \ yQkj{$i#]y -&ߘڨi#=vŒkgfvXjFd2yٽ9Vh_ ˦Oh8&qz?~ Z1ۮ,?o6pxhX~,ۋY_ߦdT=\6G=e7XoO`i4j\}α]_!?KkJRRC#,"x4ͅ-#\_?:s *F8U baDU/praAP1zyQ=¬>89QbP#㡖c#KK^8;[ +l |Q;R*lMwL5^D [}E_D7Nr}urߑݦK"#[=wѬ>eyi4͂部ғH:YjTxQs]»h>e}HڣgGz=Az4"Z؁`|=QP"oY8u V`zuz@uмNHRVJ}VUo}/D<ᔾl][kldv11me{n6hFjaZ7"+X bSZVҸƿ޵Jp#S$A5sDCv@ Bӟxhe;Sx7HwZ-yY F)yg\.#}S{ Qx>CG`=?apw'FW|z[3n ؊3z8(2T6-<^v|^_6'sia9le$t5j`KжNڃ: JK3{νof.q Z7Y)Wϊowo{l}[Y$=H<ۣA 5u#1Y̸F "f韶Bk$ dV͢E/ZԂHF/ T($ `N;cz**@hkRR%dԩT]9H6k*.ehH68éFvE!(]|u%)cE"F J,baH]l6A?[ ZY8?<bMm[lSۣ>.٘$Q]mXB8g[NKrhvI ~7@k4@XI}T#3r}0X}Ѷ4NKaO"Y]O!cOI坎bd9`V3o!dmџڸ*z'vjFTiIv+є(O/~uD141{&^ٻm,WT~L$~qf{SSTwu'yNxJd#t/@2eS"( lq2I8vU-:ƴ4i=Mrݩ0n5[vԡ1arځK)֩N]Ȳh5kwzMe̳4ɘLt qp) Zuq&K'6-Wv55P6=V@7jTMA0T 9nǺX/qc̘.xMC'5-ЇZ9Q+}M({B- >Q[$z+N g !!pw{3WvCě!Va ,s恇Ҟ ;t &hH xA%i#ACh0 =π^;_uHae\sp)5lgZJt "\ (WA >EzwîwY|7 =)90!z[Qpe #ꐷ#Rt]'x8JB)3 ;/*(p/Lu`ǂ=c(Az?c p$_ }Ud4ǖ=\`<]WԳЕq(EDQT%4 edbJb* )V"bA2BWbLkU)Z."3#xHH@3Yd8ȸbgi bb6B S, wWhB" ޽}2=CV',U7J7ts.~(.W Ff3LųU:~T|ik4ӏ#6Az7- gc~kd5UaUbѐ&^/ F6+w[wLaB iJg؋{PC7nt@f_nn@MTUʪUN"GG._CJo{COct~d3l}jkl!w9/a h4~, V>uyثFƱOl`NPH6b/܏Oxi߮aE;ޱ<$gt ԕk:&w!ճJr2|Nhiުi>Dk疪`PrꩶSg9@]Bțs#^Gad;2ͩ"a{s NSM S=SĝjZFSPak S$>Kb(0؛N 
ϣv$[ڋL݄ V8:%lU^3d[haYkx+x]PsYVOZU({qcZueօohu$m#2vtk~LhY"FHnny"%GuW6O[_y4 ies^60{2=Q7!~kpF/6jenyQDq[BwoVFSR#Z^R) 얫٘f:^(>fS11JKw{g胋op)_IfM1~la΢i`qof*뾑ufnReL$ NMJ5&ecl8ԎG;#~:Z__G4>4}n{nMŎX"fO $_bP F#"ll7#tfKB pW:w]5jIK]ʉ`#!PG6v=BqpCT:r>B: O_4$}NAEebŘL0 lF(rT=:!RPر ?[Pbnm&WoG9 -IbKsCB+ٜW9,1]J؁ʼ͈sPe$8%HbCT= % N#asf$,JGi ՎȒ ̤a6 H0LՅ(A;ah޹6>WG5G2Z]_z>\lT,z;#f/ Xj?mg=cO=NHv_MT]mt.&(4O8m4*K4@0F* BB/Q˳$VR1"RD)4z=!jkIZ;l $b~ ޒZt?]-fs*&P̂\/*8Z(^l<[6pb?* eMnۏgj222*+f-m$fYQ%ڽ]wnn2tdvp gOAQ `Ʉj{w;6SnΫm Jz(&P%{SG1p^^]RA6PLKJ j,=$rBwi!$TS\*'jL來kLOo0&tmkLbtRc)zBG <{y;6Bi uJP=$\k>,w8$&$e:ȑ!{Dy uYN쮅8Z/N6mʵγ5=Z~s9ؤ^DcQbFS 0nSv+N;ca:i{7%|:U$TbG`.kXO޵,ǿScA*Mrm%7?{TpK@;0aS>6hI\Ja8'@/ ցE`׊/M;V|i*;,@"B5 妎=!0ٗCx ӇӉS)qtw%a&Q \hL2EƱ8THd=m<7VoݦJjsŒr҇ iVNڴ>XYr $,[OEK]cUkckàc #@J ^hmUcmYIa $Smam=~bIz&509J!we*k q5L?:Ǎ"F0ٻq$YaA ng1ЃJ<8={HɎY)KD!-W,VUq>lټKGwܑv>B2`괿Ns_zR(X0hX<5( WuEEKQ!KNdEDG$ A8P@ 8LJ C0p"q"mлK'T,q*È)i,lӹHVOuk^ `r#ΥL+ǻTO)R ,{A82uljݽYD#ʩPETR=T`Aْ(򑗬wqjSf5MJRsKJ1E"7)J` )=FH5.z)j)e*VJ%ƎZT|í^JOJ.RjǍ$R f^NJS=.P 䒞{)F)uԃum3O#)6Z1Xu9I_*R*Wb381& J8eEZ8'>y\?W߇dq+5$Ҭ `  ѓ$(Q>=w/U.bb޻@~ S'pځ"TDVzjx=z$!D>Cm% [3*'zT M'b%JyN교#cxm8>v G۝X%4krB) v@@EP`A" (2 1H2PqhB >Ԭ2 mI^81TڬwL{ M1Kde¨ᘶuvJbHh I$X2 jH)B+1\bcCg c'bd`\WL/ߏ:lF,OcUbY}&IjFtacyLS ލmm>t,[wbZk}}*GWuX.O`ydD B)cq}>F6nԦ͟Cmv>eƧUCK"Y{WέѭXNF# RQ9B!w\WUPj3m1 ]yt;~ MZ7X 47;7_fs(t.Q5GymP-Ju#R> @gVQp>8l PukIOXY/~eYL8GuUT '6NH1EbҫRglNH[۳ {n)MJi;>nRR͌67)e$;Q7vcTTsx5K)w{B ;zO[5[Jܤ`D)59fsX$! Z"D4ք1Bc3@'JiCVE5,kuˉ.} $$#qndLqB3YhWCtm6EV qxPB\cu.&ٛ{h _ q^t$WVݗ0 >#CѢSCӜZeWr(N|?XwOuD܃D|dSZݔq|ZU8tHEh ^YׇsYlt똖HYJLi >:Ljiyf:|݈֑۽m1R[aBEDrF>}CZ=$=arw³,Ԟ]ɱv$!@5׏GxtϞNDGqE);AO,xNP@/W~YyՃ5P8I?Z9eO_>k#;aJ ɝ6Iųl>N= W%؎\:+SLՙKòS] \ D!ޘyT%@I'ݡ]H̷ڳ8Kre\3V\EkdȩN,myzmKC|m{f`9Ȳ Tϡއ;ono3?!+2la7K/'}\ODxש|ǛA2Y,W>3V%a0˧O9#*`,W4:<62Z0`qrN(!7 YfQv^c C=3(7ǟnCWưk%Qmr5/"՜Ҿu7MJ2x2cTT+ ORzRJQ`cTTs.-d NH1ERz}R*]jsnHF:R-Gaڈ|>@e's)A:J( x XII㮨0BJN(;nD 9Vg`kax9* !y`xawd&y@+5kj+M IA_nGuX0.e!teϢsnXF'ytM9] Sݻ|$\N^ d^!Nrנn8uyr\'. 
y[y+| `Oq.m6sv|s9::pTS犈J{]q=Czm!0{'P-̅-'ut8 t.GWbƺpAt}ut6\5Ϯ-eVm%*e.׺8Q܍hݫI,ul䌍ti$+)Sׁ..t uĀlƉ:qHN0~p8A@1 _(,ϓDM#ۄUc1l`[WoH VW%ItGRѫO.qKmz4.q1RB̥KzҧӍV\d Hi"HUf2re6}! . &-?-wWkLgWorM]aJ#A^?Uie>WN > ő>#] S0 b3ϫ`jZ+h;uV}pV #v%Bv"_SVS^"gmj<,!W07r̬ՙǑ0ZƔW0MLr S;g՘ǔL=)wFh-q2o0?Ax~<qs~hm&jgs0dlE7q-Ak> fTKݸ3`+`SڌyLj;nN)A{mi(y*ن4WF^^'5u>0v%T,w|, NJl4nrgXĠn=.k{kxkz;};m(|ջQ l C!1p^gHRAw/B gxpL0kwU޽G8 fsm{<7 ?qEr:+g$FTRd*PNox۹,1͝G2Qw m`M}~@t$S(Ǵ0ҘHT$С!BJ#$R4! q"PPq)1'zNmJ#LE8@''ŠcAch0Gɐ1d{8Du*ygE2!0WClU9 *F*"]ms㸑+*IU(*8%w[mXI̦@EK GݭLxZ(NŇDcHx>}VzܩzCk E&JPQVV|3ؚ #+_ZrT{7q&˯/9oODu Be,Z z /+ušϜd~)9d<  I罄-Xz/\9͈3"3ܤ9}lY9~&*=kT EiENIOﺉՍ$]e=8:pAqWlK>p8Pղ}R1z} `j\+B ^_k%c9vuSopM^o_~ *z)Ƌ,;S4oizӼy[7 |KHIMuN,U՚i&68#șL.Vxi~Tp~μCE3JmjIf#@"sV9CݲjsF,F"u@kcc|T?>xt\V&]Ql8+*9c#c+%B %b\RI2Y~x,5J@t1.;o r̿M _W0@X:sΊHJዂ-Il k^ Jl806/Crx Q&ѩZZ .80 1 ξh tXQQHABn[qP)3(WJV+R`TR.:{3lt(eϰYJ3*<<^ϢhRc.rꠁg8'Kxv % ;^iuN0_\Rq!yTz` ਱\Á5엜JS*5'q1XExzzʒ3F@cd,)zrO֊}Kۻ`'ǰ{7hX%PB?=,CM*\VG7ؑkfrl8bd3*_8of>./敠?Tr^2u9 RI8X 3hq5vaiZj8Mhs2?_\5mlO^vQ}Br0ש+kVQª াvL~ɫC 7WA@z %']T:DkBsM)?Sn31A餶N}9JԛwKOE nmXwnI6UjڻAT [*!Fw;Toz6,;7цMc5.RnW[qoNV{y系?;Lg=Nuӗ~q1n!Qp9-܎ş܈n]]^9pܿJ %/W6f6~2k/3qi]ec9m]y~3;wBj6J)&HK3EKiIxr+Jvyi\y9>TEhj@=ω^ֹrbڦ?4{7Ϊz=$3{U^/LK&Vgbʦ/#,_W8#XXY^,&,Z%VU*٤(VOcء8׫9}O]Ӂ}HPg?%RTRNEbmT6ƘFG{hb!@\HfRp,R(TO`'s"F3#B\q3ψ1Cuƽk/Ahl苜F% sŤ 3V\e c ω, хTȭ1 _% P ݒl$4#!cቛc4$#Ԙiݛ%V+"čB SlAhŐy#c(%T1Y1  Fcf1LF%L.sqV)IK.ҥQ4AP+ʥD )J)0d[~`wѱJªG&ԙ 'ͣ24I)0nՠnXa_-3Ϲ~ĘI${@ry}Xr7'ݐ! 
> Qyk#Rv!0,`%%\"󞈠O¿epyCci8μ%pcjgڔ1q9ޜ:vD8Ù1Ð z!#U|:^sđA SoGWzg^/Dj73.>Os\['ja/{==_,OՆCw6mmq5ߟ/K>;[nV/W9N5_'⊥pp4yXt@﵋(AN 9x4yO2T BQRۈ!9ݔt*Xag iOhMIFnDGNR116bBwu-=1лa!߹&E@|(/2vK hbLl#*v;v--Dhւ|&zMIL -.[E]]1K{jʦҚfj5ҥf揥 (22/r]7M ]oj(RZ2Q~8*ݿY;{ Fn).VpřP{=ĖidX%L\gJQIhy.yx CZIq @$P)NGkipOIFz\TB\=Ƌ\mTz-)hkQm@pGj{Zm~4{ҚmӾ5(Fk3QϻkE7Z1=hhK] QU# "{Kgx#\~0|!Ak!67RSUX%W`9KE{ uES$82$7Fm|Vr~ؖZԴāh,2LR9K(i2đ֨Wm=[u啂.H* {j2{!f/[UB;;|'YM,ɦ` H]/_JH@uA|}Z cA挈Z2@]7]H44'NлC ( ھz D@~S@`$!XuUiZ3~i9C">} _U'8sl=f ׌:Aej->jN9);g:ރ3\]UFTqշ[j[=٪ΰt,mժ*)AT:UqP`J3$@vNt Ll67wNCCȜC'Gah#LDѝ5Bj0"UZ}T͘\EZaZ7\{ %{9#,gH('2/RrP49b/]~]vt W):{tb]x^c Gh](pKÑUx$26s2Yf( Z(]lsl%2Ȗ-"ZiXXrcRn o`nzs[7:C^+͞4ZӉ۴ JƊ1xi%9gR1+@T *pVY抁k:Ms@$lւ2d40 %PR]:"2@9OEr ؐ/"p^l"CP\M Z2s>eeN."ԐrqjasU`Eѡ9wpp50X"' ȭ$X(6.֓9bvv6}/gV}& AH DT1!LhUd脱s1݊9:#!֓n9"u)ZP5{8-]__ 1`ȡ6?M tȸb]~5! {~)Ѱ@pwU7A{]B"3ڭmSo3\ZTyk@Mc{K묶5:-}Ymo&=-R+_Op޵iTPՐ졥4C_]{oIXe)lfCR7ӻLUoi(c2P&}+"1 I/!{ywm[OT"3߿Et)J4+Pvo: Tv]x15^uqH3p˾ʂ)C4Ԝ1  -Uk„$#3H] ((-dEL8"`q.S!QfsݯŸiAFYe(ե(E JTHVͻIZD2UGԒ`wX Z鳩~>]ضBv|~3+~d6|vPi6q#җ۽,irRuuٳ+wxK56ג#)眭 )i*`8I984i4:7#IQYɣ8dZR7Z*mk .-M(2͐u艶2" 8t4V TJcmgZAqK!ϼRM͗1H4pb/0`žHvVQ؆ l{e>P I79+NR nP]־ᅸW!$ڢ@ ȒL[Cp'Q"΁6@-=ݧdfC/e[@u'6 =[LӔCp ivd7=*i `{.]maCsMjyam+x&LfNBu@&8o=QB.hT4$`ر|cNZ%ܝQ1Wy,x/?3+gWԯʦ^)3FL6sjM$iq yLR @YzF_NC%&$*LB $rLXLR1#)2NL S +Q0*jqYżrw>'__暹<}YޭeI*{뇗\H/gPKڴcm0cf|}A;n0 on.~8R*u4_"8p j=1h쿠v/?۶\v'~S푱R:ޘڿѥvw) Ak+Ii.B ~I bT^]م՚Ԯ( Π>II{&~;#uT qG|]bFd'U{Wɞ˥;*IsK{Q|1PtWJrۭjŃŧUghsUQ|ī_ :&4RT)˧;yP!jta|SoǷ:8Z㉱0F\©` 0~a c b0n;|%F CsTF04YF0ceLf |(a$JNiHPM jHUM5pԣVlIݮ8MMI.Z^5qК@amfomkW@\){WsRU>G̛r}ჇEn3" ~)8B jjm Fqj= y.N” 5@v!T^ZpKq*p0<JZj@( {A$`h\dk $&N.K$1LbGijRw+Xϥ)>RSp*#[ 7}܈cթA޻^X>_Epb/Hcw|G1dYWC}?JCO~n aLԢÌ~B# akLMԂ:ruae 떮AtY/ axpXT\\huMz~H̯?&hWJh=k!0HC{MzbOfehXSl>onԒTbqAy3i_ۜ+.uvv零nvs:NKDX_cq:Lc-tS06vl[ɟ4O[$ G<[s9 =8yaw@98]Lyi`{l B|S!0tW}Ô 0.0?<})օ1o*Zeogli6L4,J A'RIL,K4hΣT;09ϹB#d*]ěa*ߌJ3-x4N4BSYcL ,aC&9KriC5Ti[]| R| s{[fk rRq-\: 28z_YCiŐ#XaNT[ cG%C@|h(zw4#R@j6 EAV{y 27yCJHGlP$avOdiqOU[:$]B5`T˼qXa 쪝ĩmȳR6ꉰ16dE-$ҹep$p%"R7k*Iqc]^ѡ&Q 
~aܽ^2ks;*89SZUFȭ[NWuh;j7lDۿ@ iP!mg#{A6a;%jQB%mB1 [g\ig+;{?/w:ERbe<Ѕ$$L)>(Wa^t0;{&\b,(>_mfb{4Fg#>xl<+=";FVJ^+ڇ {WJa*NMI_IsIúחCz@X8wC>B} #(?FnX/"fW}+0cz{#:cV>BTT9 QOQ+B}b4((砘Ǧ3Dm3Q FXCY >t0؋ouaqXG=`+a1c%diXΆQjP=gF=i"DlUK\5εi0תr@nݽlMSt·qJtfB>Kmp=]isG y"M|B/hH}_VX:@|ɸҪNWڐCQ=6.dEfLufM4ʗ YJnd_3xkk%MǷ/^Ծ^团rϋ`Ps*7s#>Ka xfbҊrlVmUUލ'V^~j}^O>BKZ?MOwn1: @% 4sUޞ$4=˵}lB%:_~fR(mtbV;wq:L$ɧ5RDp!xvtx a W/(եL¤Y]j#ei";63e Vw˿둅EF {o_]j\BԲT.zGK&cC]6]?PJ]TX [)3=֥}FԗR!z:m+g༒7oR~VZHMڨJOJY𥇤ܔ 3i[0~V*t?|!/7ćT3=WRևԗRփ/=m+g&Wh ֒~|B]Woò^n K1Srw_Z#8{W_ڽҲs4uocJ~NޫRl:zl17E#uaQ k S:cR۰PɲfM:HcR ېNMBHAO@Zpje;v;ZFI@ B![%ڽ3nhvd $^U`5]^{Jj6\!%A;@5M.n^ۯXXji5wV*d4NȮLAH(xOs>/(BvlOMwm?yXP EVOTN ;AK enK o aF Feķ0}D<[TuJHLlnwECj^Bf4?x@7AaH5Q4.gp(u|ljX3Rq.d,kRcFZD %T%1N"#irL2Hؗuۈy4W}PVU%K)i%h(͑S cM5&xjL#2&cEW&\4},/Y>ˬ՗[=u>Ījd]?~5z6^oVm)<)D)`¤6qdb9F3R!ݠʕ`cG!mMZK;xT`ŝ"NGLS$=TN2eʣS1r"nd 3U)fG1Xk'r&NmE"KƺޯE#'_}O+7eIӣ~VԫR?~h&I&K|('Zz9ˊ~޸Avswqfݧ3#3zK;bțήu RNon.~t4;`q? Ӡ_:lLӑJ׿lrݞ4R;R)f|o}}TF132"2WB6~{>Wpd 1v,f$uM>l7go@Z*/?|/Wv}q{yHN;) @ 䀘j-Og=xT Z7T dPk jPS!NZ)[<sC Ye3xf<KTEu (xnd <:>Yq屓u`H 7g槝G!sC ؏?.y~ .-hbϙygAs~'ԟ9:퉏l_QdzDq@a#ps,Hb#p ^iϏ8?\jBe3s{İ'N%AɞC \s;LJUyhJU`HUC)UGKCj )؇s-51bOa̎HnM< 1;6R3i4?JOhPHexF)TqiR8 yiT0jiCRږ0|RA2Z(\7+ۥ-, ڍl u< /]"FvNZ.JpFʡ&NQҧ {*>6k|]Fװ$W/ߖPF:r²QۅuPe Ɋu,3, *tZd!~`ܢ:&AK{.Ҋk| 'V rԞowR`óob}hg):bw(YTzo @osuO,4-%jfEz R)\:  [zJ%U%=LJYΠ QWUZBJQ꼆b_HXT8dyI6iA:3rVeURft1QiR"ܖ3]9f@3eaF!Թ3U0*6J֬-Pdd ;;n5,7Ĩtn*`! 
EY] Cm9{j[EVSQ*U.hۨ Œ biz`"M{g8V}M@z )%'Jeg)36rf$p)|RʜdbFPMvwitfwi]OdUjŠ\,#Ah)$IA@ 4ޠ(] õ>\8|֭G pZf|\i#g轐\WOb 9Σ$BQz!G޻a-\y(kg ܥGJ*ם*LjXM }VeCa߀ >ة7ݥ&fH|F]j$GX׍.%qZG^Oa 5Fݥ& ;iKjɈˁDpϕ..;~oۥCk5Mp}:}K sqpU~^lϔu_/HJrkʥDwާfIϥe3'?^tM85nka}˕CrMTWgѭ}'lj~#&}y;XtK/E n} !)k&qnJ[ B6*an-JF`t!g-L", DW[+I`#7ǥ(h.H@` ؋pyEcUI$m}'[sF2ІrTAѨIF`WJՐU\ UgB@!te͋ڙZf(j2+0W\d/̨ͨJ)n-ɊQ&';L eZ7279euYjº&,Bd5 %W*s֘Y 3*2 5F'Q imK"v~"!wSC?1|*[׻M?O3h /V[ B6bRp:hѭ9D0e`za6N7bۈV̱[z{`t!gΘb}Sm= y0X'aIusT?7z8.W]G'١ŗI{^?.VPpU9X-muqppݿ_o>HKL߯jS=6⾸|헪å|o𻸮u#n]{6ozM>}|h;Mv5д5.L-ȅJ.JԠ%7{_E9wC Pf>3 :·JHf>Ȧ݌}7t WOHK=bhys]ü^l.3[%)Xjb-dV-kg"(P*H}IܵWzEvs딫+^MόcטHy#w8[@__#7JCZiHK(1MgddҊ5 ŵhȠT.rekcUfHiU !Ǻtjs<yELb274BW2LUE)D P\VI)(a#*kRfMG `_VWnaӶ-t 0qy Md\nv2nE]vr!h%}`[WXֆMGL&0n 8n߶>uc (X.[8Kq7Hwۭug4$xy%>U|%Ҥ񃜳i:ٹQz&^NkƄՓIޑ?@?$;\݆Of=$n:a.k's@hxT _~eY+6yAP@X5qA#I8VV l[2%;\˂aZigJ 49... jhIL=0#lw<~3=q#XLfJ=!Nд@Ñ.YA4&Hzxfb<\$Y[^Dy%7Ud11j ~<#O-B'nEȭbX׬k}4@EeXFݧF&@~jGIS7`~O7RN'5/l'&HQQ# a (<$lI-K!$J1"C(?Jݦ6!߈aЧ/h[H<6mzpBIcR[ B6OxS3\wZBIjpM5|4rT@'1mUbR7[9sv6/Iv5N t`]n {F=q,.k'$)7׵ w MoB/~va >mi`j'X/El=TUxvt"ulU*pIp|l&tv 1ҏe|{c.t14 LbJSTY/3;'̓C:WX*\ڹZ`\羀f˻^hS6Q4uhR&XՕnhE]LBG0L4Zɛ]劾^URe:cbBQ2vզ$,Q=x,l}Ř֐;#L8O1 w]\_0mtE3ÿe7az2VnvSonMR7GHab8>&ԢT?oM?a=2rF+e+&*Ke9%6KC0BTՌ@u{M#<޵Fr#_ev`0Ha퇝;0}yiJ}Ȭu 潤2 Ò*'SD)2yQ 5"賃ՠsTۧcuBals:5u 4PT (( -'R2y hBBbl@1ɰ$Y9?,Y,Yu:Œ*WWB\6? g8lWk7 #WYlSj?+֐`z}_j(dCƹ,4S:dCYdfNY%w!>=6Qp8vw_d&/#Ɵ?W6Gl?~΄R(ytOeio#~ zpfX[ >4}a63QC?z 돑ϼL皋xg13q`@1{Z\fO[?=yX7c(Ja#y(zЍOMsa}f%X98ݤh)ʃg"q4$PѠ<˚?,5D{^y^Xco4"omN!.LmPF=2FB15Ӏ@f#0bd3g}j)hLV\j=yUbWLEi*.`}(Mel}0P}0P ҢL5kV#7~h?~ A_yWOw0CJaZcTb!ڄ«~ ==񟸾k}){! 
Feb 14 18:42:25 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 14 18:42:25 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c15,c25 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 14 18:42:25 crc restorecon[4695]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 14 18:42:25 crc restorecon[4695]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 
18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 18:42:25 crc restorecon[4695]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:25 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 
18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 
18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 
crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 14 18:42:26 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 18:42:26 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 14 18:42:27 crc kubenswrapper[4897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 18:42:27 crc kubenswrapper[4897]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 14 18:42:27 crc kubenswrapper[4897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 18:42:27 crc kubenswrapper[4897]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 18:42:27 crc kubenswrapper[4897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 18:42:27 crc kubenswrapper[4897]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.513429 4897 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525453 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525490 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525500 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525509 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525518 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525530 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525540 4897 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525550 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525558 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525566 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525574 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525582 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525589 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525597 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525606 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525614 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525621 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525629 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525636 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525644 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525652 4897 feature_gate.go:330] 
unrecognized feature gate: EtcdBackendQuota Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525659 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525667 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525675 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525683 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525694 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525704 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525713 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525722 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525731 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525739 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525748 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525758 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525766 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525776 4897 
feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525792 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525802 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525812 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525823 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525831 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525839 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525847 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525855 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525863 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525870 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525878 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525885 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525893 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525901 4897 feature_gate.go:330] unrecognized feature 
gate: Example Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525909 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525917 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525924 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525932 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525942 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525952 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525961 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525968 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525976 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525984 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525992 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.525999 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526007 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526014 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 
18:42:27.526022 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526059 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526068 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526077 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526084 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526092 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526099 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.526107 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527533 4897 flags.go:64] FLAG: --address="0.0.0.0" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527555 4897 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527571 4897 flags.go:64] FLAG: --anonymous-auth="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527582 4897 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527593 4897 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527603 4897 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527614 4897 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527625 4897 
flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527635 4897 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527645 4897 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527655 4897 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527666 4897 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527676 4897 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527685 4897 flags.go:64] FLAG: --cgroup-root="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527693 4897 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527702 4897 flags.go:64] FLAG: --client-ca-file="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527712 4897 flags.go:64] FLAG: --cloud-config="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527720 4897 flags.go:64] FLAG: --cloud-provider="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527729 4897 flags.go:64] FLAG: --cluster-dns="[]" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527739 4897 flags.go:64] FLAG: --cluster-domain="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527749 4897 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527764 4897 flags.go:64] FLAG: --config-dir="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527776 4897 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527788 4897 flags.go:64] FLAG: --container-log-max-files="5" Feb 14 18:42:27 crc kubenswrapper[4897]: 
I0214 18:42:27.527802 4897 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527814 4897 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527826 4897 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527839 4897 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527850 4897 flags.go:64] FLAG: --contention-profiling="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527862 4897 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527874 4897 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527886 4897 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527896 4897 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527914 4897 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527923 4897 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527932 4897 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527941 4897 flags.go:64] FLAG: --enable-load-reader="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527950 4897 flags.go:64] FLAG: --enable-server="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527962 4897 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527977 4897 flags.go:64] FLAG: --event-burst="100" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527987 4897 flags.go:64] FLAG: --event-qps="50" Feb 14 
18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.527996 4897 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528005 4897 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528014 4897 flags.go:64] FLAG: --eviction-hard="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528066 4897 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528076 4897 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528085 4897 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528096 4897 flags.go:64] FLAG: --eviction-soft="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528106 4897 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528115 4897 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528124 4897 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528133 4897 flags.go:64] FLAG: --experimental-mounter-path="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528142 4897 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528152 4897 flags.go:64] FLAG: --fail-swap-on="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528161 4897 flags.go:64] FLAG: --feature-gates="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528172 4897 flags.go:64] FLAG: --file-check-frequency="20s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528181 4897 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528190 4897 flags.go:64] FLAG: 
--hairpin-mode="promiscuous-bridge" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528200 4897 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528209 4897 flags.go:64] FLAG: --healthz-port="10248" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528218 4897 flags.go:64] FLAG: --help="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528227 4897 flags.go:64] FLAG: --hostname-override="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528235 4897 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528246 4897 flags.go:64] FLAG: --http-check-frequency="20s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528255 4897 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528264 4897 flags.go:64] FLAG: --image-credential-provider-config="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528272 4897 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528281 4897 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528291 4897 flags.go:64] FLAG: --image-service-endpoint="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528300 4897 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528308 4897 flags.go:64] FLAG: --kube-api-burst="100" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528317 4897 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528327 4897 flags.go:64] FLAG: --kube-api-qps="50" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528336 4897 flags.go:64] FLAG: --kube-reserved="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528345 4897 flags.go:64] FLAG: 
--kube-reserved-cgroup="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528354 4897 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528364 4897 flags.go:64] FLAG: --kubelet-cgroups="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528373 4897 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528383 4897 flags.go:64] FLAG: --lock-file="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528391 4897 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528400 4897 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528410 4897 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528423 4897 flags.go:64] FLAG: --log-json-split-stream="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528433 4897 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528442 4897 flags.go:64] FLAG: --log-text-split-stream="false" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528452 4897 flags.go:64] FLAG: --logging-format="text" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528460 4897 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528470 4897 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528479 4897 flags.go:64] FLAG: --manifest-url="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528488 4897 flags.go:64] FLAG: --manifest-url-header="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528499 4897 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528509 4897 
flags.go:64] FLAG: --max-open-files="1000000"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528519 4897 flags.go:64] FLAG: --max-pods="110"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528528 4897 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528547 4897 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528557 4897 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528566 4897 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528575 4897 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528584 4897 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528594 4897 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528616 4897 flags.go:64] FLAG: --node-status-max-images="50"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528625 4897 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528634 4897 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528644 4897 flags.go:64] FLAG: --pod-cidr=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528653 4897 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528666 4897 flags.go:64] FLAG: --pod-manifest-path=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528675 4897 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528684 4897 flags.go:64] FLAG: --pods-per-core="0"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528694 4897 flags.go:64] FLAG: --port="10250"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528703 4897 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528712 4897 flags.go:64] FLAG: --provider-id=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528721 4897 flags.go:64] FLAG: --qos-reserved=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528730 4897 flags.go:64] FLAG: --read-only-port="10255"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528740 4897 flags.go:64] FLAG: --register-node="true"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528749 4897 flags.go:64] FLAG: --register-schedulable="true"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528758 4897 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528773 4897 flags.go:64] FLAG: --registry-burst="10"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528782 4897 flags.go:64] FLAG: --registry-qps="5"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528790 4897 flags.go:64] FLAG: --reserved-cpus=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528800 4897 flags.go:64] FLAG: --reserved-memory=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528812 4897 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528821 4897 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528830 4897 flags.go:64] FLAG: --rotate-certificates="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528839 4897 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528848 4897 flags.go:64] FLAG: --runonce="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528859 4897 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528868 4897 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528881 4897 flags.go:64] FLAG: --seccomp-default="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528889 4897 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528898 4897 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528908 4897 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528920 4897 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528930 4897 flags.go:64] FLAG: --storage-driver-password="root"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528938 4897 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528947 4897 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528956 4897 flags.go:64] FLAG: --storage-driver-user="root"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528965 4897 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528975 4897 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528984 4897 flags.go:64] FLAG: --system-cgroups=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.528992 4897 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529007 4897 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529016 4897 flags.go:64] FLAG: --tls-cert-file=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529025 4897 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529059 4897 flags.go:64] FLAG: --tls-min-version=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529068 4897 flags.go:64] FLAG: --tls-private-key-file=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529077 4897 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529086 4897 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529095 4897 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529105 4897 flags.go:64] FLAG: --v="2"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529116 4897 flags.go:64] FLAG: --version="false"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529127 4897 flags.go:64] FLAG: --vmodule=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529138 4897 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.529148 4897 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529405 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529418 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529431 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529440 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529450 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529459 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529471 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529479 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529488 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529495 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529511 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529519 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529527 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529535 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529545 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529555 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529565 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529573 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529581 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529589 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529597 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529605 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529613 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529621 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529629 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529637 4897 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529645 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529652 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529660 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529668 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529676 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529683 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529691 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529699 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529706 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529715 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529722 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529731 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529744 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529752 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529760 4897 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529768 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529778 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529786 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529796 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529805 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529814 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529823 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529832 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529840 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529849 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529857 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529864 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529872 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529881 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529891 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529900 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529910 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529918 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529926 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529934 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529942 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529950 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529958 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529965 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529973 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529981 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529989 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.529997 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.530004 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.530013 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.530026 4897 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.543590 4897 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.543654 4897 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543806 4897 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543829 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543840 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543850 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543859 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543867 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543876 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543885 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543894 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543903 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543911 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543919 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543927 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543936 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543944 4897 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543951 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543959 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543966 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543974 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543982 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543990 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.543998 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544006 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544014 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544022 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544061 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544069 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544077 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544085 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544099 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544110 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544120 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544128 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544137 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544146 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544154 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544162 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544170 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544178 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544187 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544195 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544203 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544210 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544218 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544226 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544233 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544241 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544250 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544260 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544269 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544279 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544289 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544298 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544309 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544319 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544329 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544340 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544351 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544359 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544367 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544377 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544386 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544394 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544403 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544410 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544422 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544430 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544437 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544445 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544453 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544461 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.544474 4897 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544721 4897 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544734 4897 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544742 4897 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544751 4897 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544761 4897 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544771 4897 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544780 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544790 4897 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544798 4897 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544807 4897 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544816 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544825 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544833 4897 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544842 4897 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544849 4897 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544857 4897 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544864 4897 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544872 4897 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544881 4897 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544889 4897 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544896 4897 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544904 4897 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544913 4897 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544921 4897 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544929 4897 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544937 4897 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544944 4897 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544952 4897 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544962 4897 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544974 4897 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544982 4897 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544990 4897 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.544998 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545005 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545014 4897 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545021 4897 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545051 4897 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545060 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545068 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545078 4897 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545088 4897 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545099 4897 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545109 4897 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545119 4897 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545127 4897 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545136 4897 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545145 4897 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545153 4897 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545161 4897 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545169 4897 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545177 4897 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545185 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545193 4897 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545200 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545208 4897 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545217 4897 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545224 4897 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545232 4897 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545239 4897 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545247 4897 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545255 4897 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545263 4897 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545297 4897 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545306 4897 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545313 4897 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545322 4897 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545330 4897 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545338 4897 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545346 4897 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545354 4897 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.545362 4897 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 18:42:27
crc kubenswrapper[4897]: I0214 18:42:27.545375 4897 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.545624 4897 server.go:940] "Client rotation is on, will bootstrap in background" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.552476 4897 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.552611 4897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.555408 4897 server.go:997] "Starting client certificate rotation" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.555453 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.556798 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 23:09:22.104814553 +0000 UTC Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.556926 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.583138 4897 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.587410 4897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.589542 4897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.607797 4897 log.go:25] "Validated CRI v1 runtime API" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.648809 4897 log.go:25] "Validated CRI v1 image API" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.651081 4897 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.656269 4897 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-14-18-37-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.656313 4897 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.685768 4897 manager.go:217] Machine: {Timestamp:2026-02-14 18:42:27.681809273 +0000 UTC m=+0.658217806 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3852ed47-2b76-43f4-bf60-51d80952e808 BootID:41bffe32-6f10-4c7d-a67d-9930279261bf Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:39:ef:23 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:39:ef:23 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4b:8d:d1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:67:ff:9b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:a2:6d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fc:aa:c9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:9e:d3:28:c0:c4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:9d:44:02:86:27 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.686225 4897 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.686602 4897 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.689369 4897 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.689690 4897 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.689753 4897 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Q
uantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.690120 4897 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.690140 4897 container_manager_linux.go:303] "Creating device plugin manager" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.690753 4897 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.690804 4897 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.691024 4897 state_mem.go:36] "Initialized new in-memory state store" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.691292 4897 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.696066 4897 kubelet.go:418] "Attempting to sync node with API server" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.696103 4897 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.696204 4897 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.696229 4897 kubelet.go:324] "Adding apiserver pod source" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.696246 4897 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.700470 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.700649 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.700557 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.700824 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.701959 4897 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.703191 4897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.706210 4897 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708174 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708219 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708235 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708249 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708269 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708282 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708298 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708320 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708343 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708356 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708403 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.708418 4897 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.709574 4897 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.710830 4897 server.go:1280] "Started kubelet" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.714855 4897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.714867 4897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.715655 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 14 18:42:27 crc systemd[1]: Started Kubernetes Kubelet. Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.717514 4897 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.721635 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.721903 4897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.721853 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:20:54.041741076 +0000 UTC Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.722147 4897 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.722990 4897 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.722174 4897 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.725631 4897 
server.go:460] "Adding debug handlers to kubelet server" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.725775 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.726348 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.726603 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.727418 4897 factory.go:55] Registering systemd factory Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.727469 4897 factory.go:221] Registration of the systemd container factory successfully Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.727898 4897 factory.go:153] Registering CRI-O factory Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.727939 4897 factory.go:221] Registration of the crio container factory successfully Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.728147 4897 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.728205 4897 factory.go:103] Registering Raw factory Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.728265 4897 manager.go:1196] Started watching for new ooms in manager Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.726911 4897 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.727574 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894311b83eeb8cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 18:42:27.710785741 +0000 UTC m=+0.687194264,LastTimestamp:2026-02-14 18:42:27.710785741 +0000 UTC m=+0.687194264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.729663 4897 manager.go:319] Starting recovery of all containers Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741239 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741410 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 
18:42:27.741447 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741481 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741506 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741530 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741565 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741592 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741625 4897 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741644 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741663 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741682 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741701 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741725 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741745 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741763 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741781 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741800 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741818 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741837 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741857 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741875 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741895 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741914 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741936 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.741953 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742014 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742077 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742096 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742118 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742135 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742154 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742172 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742220 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742248 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742274 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742295 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742312 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742329 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742347 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742375 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742396 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742415 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742435 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742456 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742487 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742512 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742535 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742570 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742590 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742608 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742626 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742654 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742691 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742713 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742733 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742757 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742779 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742796 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742814 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742832 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742853 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742871 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742895 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742915 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742934 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742951 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742969 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.742987 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743005 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743023 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743070 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743088 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743113 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743131 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743156 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743185 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743213 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743233 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743252 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743277 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743302 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743327 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743350 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743367 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743385 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743402 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743419 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743437 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743455 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743472 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743488 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743521 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743542 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743561 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743582 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743602 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743623 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743642 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743675 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743695 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743717 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743738 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743757 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743784 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743804 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743825 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743853 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743882 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743903 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743924 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743943 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743969 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.743996 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744015 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744060 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744079 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744097 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744118 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744144 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744167 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744191 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744215 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744235 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744252 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744274 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744311 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744338 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744366 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744385 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744404 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744422 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744438 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744457 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744483 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744508 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744546 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744575 4897 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744600 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744627 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744656 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744688 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744714 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744739 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744762 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744784 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744808 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744835 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744853 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744883 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744908 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744926 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.744965 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745663 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745730 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745754 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745805 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745827 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745859 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745881 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745901 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745933 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.745955 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746146 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746166 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746185 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746214 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746240 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" 
seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746268 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746287 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746313 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746338 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746360 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746386 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746408 4897 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746427 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746461 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746491 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746532 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746562 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746583 4897 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746612 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746631 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746659 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746680 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746701 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.746731 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752586 4897 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752674 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752709 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752733 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752766 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752787 4897 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752822 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752853 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752880 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752911 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752935 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752963 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.752983 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753004 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753063 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753096 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753125 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753145 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753166 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753194 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753214 4897 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753242 4897 reconstruct.go:97] "Volume reconstruction finished" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.753256 4897 reconciler.go:26] "Reconciler: start to sync state" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.766231 4897 manager.go:324] Recovery completed Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.776643 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.779388 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.779440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 
18:42:27.779458 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.780737 4897 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.780762 4897 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.780785 4897 state_mem.go:36] "Initialized new in-memory state store" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.789312 4897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.792421 4897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.792548 4897 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.792606 4897 kubelet.go:2335] "Starting kubelet main sync loop" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.792690 4897 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 18:42:27 crc kubenswrapper[4897]: W0214 18:42:27.793540 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.793630 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" 
Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.806404 4897 policy_none.go:49] "None policy: Start" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.807343 4897 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.807382 4897 state_mem.go:35] "Initializing new in-memory state store" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.827053 4897 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.862471 4897 manager.go:334] "Starting Device Plugin manager" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.862521 4897 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.862533 4897 server.go:79] "Starting device plugin registration server" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.862963 4897 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.862983 4897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.863420 4897 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.863647 4897 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.863677 4897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.874634 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.893041 4897 kubelet.go:2421] 
"SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.893150 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.894311 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.894341 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.894352 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.894464 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.894755 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.894856 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895376 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895435 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895760 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895865 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895892 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.895762 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.896245 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.896273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.896762 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.896785 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.896808 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.898758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.898793 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.898803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.898926 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc 
kubenswrapper[4897]: I0214 18:42:27.899147 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.899213 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900263 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900317 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900808 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900834 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.900990 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.901267 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.901402 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902116 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902127 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902313 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902344 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902485 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902544 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902953 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.902996 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 
18:42:27.903014 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.927521 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.957014 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.957833 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.959874 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.959977 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960055 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960154 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960244 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960287 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960324 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960398 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960438 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.960477 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.963210 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.965012 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.966980 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.967103 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.967129 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:27 crc kubenswrapper[4897]: I0214 18:42:27.967202 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 18:42:27 crc kubenswrapper[4897]: E0214 18:42:27.968108 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065061 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065152 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065182 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065205 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc 
kubenswrapper[4897]: I0214 18:42:28.065260 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065271 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065285 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065300 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065397 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065382 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065431 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065433 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065494 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065627 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065693 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065727 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065737 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065780 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065820 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065823 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065842 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065852 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065892 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065908 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 
18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.065966 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.168217 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.169302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.169403 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.169428 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.169472 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 18:42:28 crc kubenswrapper[4897]: E0214 18:42:28.170002 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.252309 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.273865 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.281099 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.302721 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.307884 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.310845 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-23f805f99dc5f757d43a037aea120eb9791d16fe714497b7fed41aebb1a340eb WatchSource:0}: Error finding container 23f805f99dc5f757d43a037aea120eb9791d16fe714497b7fed41aebb1a340eb: Status 404 returned error can't find the container with id 23f805f99dc5f757d43a037aea120eb9791d16fe714497b7fed41aebb1a340eb Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.312987 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f5d5e69c18dedca62727b40ac310921405e56d3976bbe1edeb7f241c006b0d29 WatchSource:0}: Error finding container f5d5e69c18dedca62727b40ac310921405e56d3976bbe1edeb7f241c006b0d29: Status 404 returned error can't find the container with id f5d5e69c18dedca62727b40ac310921405e56d3976bbe1edeb7f241c006b0d29 Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.317544 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-16f3907f1b63ef0ea71ff9e70ba3a377c908df0b2bd89b5571dc5abce3bbfd70 WatchSource:0}: Error finding container 16f3907f1b63ef0ea71ff9e70ba3a377c908df0b2bd89b5571dc5abce3bbfd70: Status 404 returned error can't find the container with id 
16f3907f1b63ef0ea71ff9e70ba3a377c908df0b2bd89b5571dc5abce3bbfd70 Feb 14 18:42:28 crc kubenswrapper[4897]: E0214 18:42:28.328960 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.331970 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ac51ddf6b235a78f995bd87e18c0b17a1dba93633700fc3e5ad420ebca81dcd4 WatchSource:0}: Error finding container ac51ddf6b235a78f995bd87e18c0b17a1dba93633700fc3e5ad420ebca81dcd4: Status 404 returned error can't find the container with id ac51ddf6b235a78f995bd87e18c0b17a1dba93633700fc3e5ad420ebca81dcd4 Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.335242 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-70dc5a4027eeecd6f2c7600cfa11f47eab3cac215dfb5effb0702eec11b24ce4 WatchSource:0}: Error finding container 70dc5a4027eeecd6f2c7600cfa11f47eab3cac215dfb5effb0702eec11b24ce4: Status 404 returned error can't find the container with id 70dc5a4027eeecd6f2c7600cfa11f47eab3cac215dfb5effb0702eec11b24ce4 Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.570925 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.572622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.572652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:28 crc 
kubenswrapper[4897]: I0214 18:42:28.572663 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.572683 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 14 18:42:28 crc kubenswrapper[4897]: E0214 18:42:28.572913 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.661392 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:28 crc kubenswrapper[4897]: E0214 18:42:28.661491 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:28 crc kubenswrapper[4897]: W0214 18:42:28.670528 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:28 crc kubenswrapper[4897]: E0214 18:42:28.670597 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:28 crc kubenswrapper[4897]: E0214 18:42:28.673503 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894311b83eeb8cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 18:42:27.710785741 +0000 UTC m=+0.687194264,LastTimestamp:2026-02-14 18:42:27.710785741 +0000 UTC m=+0.687194264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.716498 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.723780 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:07:28.303816213 +0000 UTC
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.798106 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"23f805f99dc5f757d43a037aea120eb9791d16fe714497b7fed41aebb1a340eb"}
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.800355 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70dc5a4027eeecd6f2c7600cfa11f47eab3cac215dfb5effb0702eec11b24ce4"}
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.801765 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ac51ddf6b235a78f995bd87e18c0b17a1dba93633700fc3e5ad420ebca81dcd4"}
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.802731 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"16f3907f1b63ef0ea71ff9e70ba3a377c908df0b2bd89b5571dc5abce3bbfd70"}
Feb 14 18:42:28 crc kubenswrapper[4897]: I0214 18:42:28.804119 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f5d5e69c18dedca62727b40ac310921405e56d3976bbe1edeb7f241c006b0d29"}
Feb 14 18:42:29 crc kubenswrapper[4897]: E0214 18:42:29.130565 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s"
Feb 14 18:42:29 crc kubenswrapper[4897]: W0214 18:42:29.169333 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:29 crc kubenswrapper[4897]: E0214 18:42:29.169491 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:29 crc kubenswrapper[4897]: W0214 18:42:29.287230 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:29 crc kubenswrapper[4897]: E0214 18:42:29.287367 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.373850 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.375461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.375548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.375568 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.375607 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 14 18:42:29 crc kubenswrapper[4897]: E0214 18:42:29.376249 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.665739 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 14 18:42:29 crc kubenswrapper[4897]: E0214 18:42:29.667093 4897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.716634 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.724665 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:21:10.283811309 +0000 UTC
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.809255 4897 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417" exitCode=0
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.809333 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.809419 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.811508 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.811552 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.811571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.814268 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.814327 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.814357 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.814383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.814509 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.815564 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.815614 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.815633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.816890 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4" exitCode=0
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.817011 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.817250 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.818757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.818803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.818823 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.819496 4897 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f8fcdbac6833ce37b3c62daf72e260ab97f35fee3177323a0295c13a89eea088" exitCode=0
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.819618 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f8fcdbac6833ce37b3c62daf72e260ab97f35fee3177323a0295c13a89eea088"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.819655 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.820967 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.821017 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.821064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.821280 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.823049 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.823086 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.823105 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.823109 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7355629c689c993b7ba57e1e076d28c952f689bad2e7dacc0ac0fe78aa083f8a"}
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.823074 4897 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7355629c689c993b7ba57e1e076d28c952f689bad2e7dacc0ac0fe78aa083f8a" exitCode=0
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.823214 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.824829 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.824859 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:29 crc kubenswrapper[4897]: I0214 18:42:29.824876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.716983 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.725267 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:26:23.448020784 +0000 UTC
Feb 14 18:42:30 crc kubenswrapper[4897]: E0214 18:42:30.732464 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="3.2s"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.828331 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"df6dc1c29ebe4f77de0cdf38a6bdea29fcb3d9c7e01e3e54239d703a89cf44e8"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.828484 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.829855 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.829907 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.829925 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.831721 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.831747 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.831758 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.831794 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.832809 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.832839 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.832852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.833940 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.833966 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.833977 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.835403 4897 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7b2daa587b3c6a3ece90325415635588a2e1f1732bf874db8d5b54d322c12e96" exitCode=0
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.835508 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.835851 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7b2daa587b3c6a3ece90325415635588a2e1f1732bf874db8d5b54d322c12e96"}
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.835895 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.836210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.836240 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.836261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.836474 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.836490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.836499 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.977053 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.978829 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.978877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.978886 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:30 crc kubenswrapper[4897]: I0214 18:42:30.978913 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 14 18:42:30 crc kubenswrapper[4897]: E0214 18:42:30.979589 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Feb 14 18:42:31 crc kubenswrapper[4897]: W0214 18:42:31.244945 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:31 crc kubenswrapper[4897]: E0214 18:42:31.245068 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:31 crc kubenswrapper[4897]: W0214 18:42:31.307193 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:31 crc kubenswrapper[4897]: E0214 18:42:31.307305 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:31 crc kubenswrapper[4897]: W0214 18:42:31.644792 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:31 crc kubenswrapper[4897]: E0214 18:42:31.644900 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.716599 4897 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.726104 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:25:14.472382449 +0000 UTC
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.843943 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7"}
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.844019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a"}
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.844218 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.846354 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.846407 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.846429 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.846806 4897 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f63caa6f6a75d6f1ba250da1c7963dd1b2c116481683876b14bcd0d300f2e080" exitCode=0
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.847009 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.847128 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.847391 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848269 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f63caa6f6a75d6f1ba250da1c7963dd1b2c116481683876b14bcd0d300f2e080"}
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848350 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848719 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848773 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848794 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848814 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848834 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.848847 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.849620 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.849649 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:31 crc kubenswrapper[4897]: I0214 18:42:31.849699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.660461 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.661235 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.662877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.662931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.662951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.670521 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.726622 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:10:38.039862666 +0000 UTC
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.853012 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.855793 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7" exitCode=255
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.855863 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7"}
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.855951 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.857366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.857415 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.857446 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.858452 4897 scope.go:117] "RemoveContainer" containerID="062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.860517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"421d771dda8f80416795ec901cc21768d8341c3a66a6ad00bc21ad2fcaba75e7"}
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.860601 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.860601 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"25f1c90dbbeeee4f5ba669d6702fd3b7735444611449744868ad972ffb10cf56"}
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.860767 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c2d4661941882ddd223264614b45551485042d2c7139457b264768df37b8583"}
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.862333 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.862384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:32 crc kubenswrapper[4897]: I0214 18:42:32.862400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.521010 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.673278 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.726878 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:46:45.348367267 +0000 UTC
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.866827 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.869787 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.869810 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a"}
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.869903 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.871247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.871308 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.871325 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.876383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cedba242d8c900abe3061efd6123835592068264ae3720aa10cbbf3c9456bc5c"}
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.876429 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da1beb8618b008940db380111af4f81df31a4eea6d2c09ff68e1f7cb8b1d6a92"}
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.876546 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.877591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.877705 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:33 crc kubenswrapper[4897]: I0214 18:42:33.877728 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.119870 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.120104 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.121580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.121644 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.121666 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.180148 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.181469 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.181676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.181905 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.182099 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.727111 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:20:18.041880545 +0000 UTC
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.879389 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.879655 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.879814 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.880414 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.880457 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.880473 4897
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.881498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.881704 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:34 crc kubenswrapper[4897]: I0214 18:42:34.881848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.472584 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.538695 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.538891 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.540362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.540411 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.540429 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.728206 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:35:56.340280156 +0000 UTC Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.881957 
4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.883591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.883653 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:35 crc kubenswrapper[4897]: I0214 18:42:35.883676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.201064 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.728722 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:30:35.919500876 +0000 UTC Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.837829 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.838023 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.839304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.839477 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.839617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.884340 4897 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.885493 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.885660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:36 crc kubenswrapper[4897]: I0214 18:42:36.885843 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.417161 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.417428 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.419240 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.419308 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.419333 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.729366 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:49:48.347715455 +0000 UTC Feb 14 18:42:37 crc kubenswrapper[4897]: E0214 18:42:37.874806 4897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.934435 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.934700 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.936276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.936342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:37 crc kubenswrapper[4897]: I0214 18:42:37.936364 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:38 crc kubenswrapper[4897]: I0214 18:42:38.539573 4897 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 18:42:38 crc kubenswrapper[4897]: I0214 18:42:38.539654 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 18:42:38 crc kubenswrapper[4897]: I0214 18:42:38.730325 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:10:59.509662184 +0000 UTC Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.536987 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 
14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.537217 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.538552 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.538585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.538598 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.543764 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.731883 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:04:33.72669904 +0000 UTC Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.892271 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.894209 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.894299 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:39 crc kubenswrapper[4897]: I0214 18:42:39.894329 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:40 crc kubenswrapper[4897]: I0214 18:42:40.732456 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-13 04:26:25.459214808 +0000 UTC Feb 14 18:42:41 crc kubenswrapper[4897]: I0214 18:42:41.733279 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:41:19.101313926 +0000 UTC Feb 14 18:42:42 crc kubenswrapper[4897]: W0214 18:42:42.144488 4897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 14 18:42:42 crc kubenswrapper[4897]: I0214 18:42:42.144690 4897 trace.go:236] Trace[121799340]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 18:42:32.142) (total time: 10002ms): Feb 14 18:42:42 crc kubenswrapper[4897]: Trace[121799340]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:42.144) Feb 14 18:42:42 crc kubenswrapper[4897]: Trace[121799340]: [10.002060357s] [10.002060357s] END Feb 14 18:42:42 crc kubenswrapper[4897]: E0214 18:42:42.144740 4897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 14 18:42:42 crc kubenswrapper[4897]: I0214 18:42:42.475987 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} 
Feb 14 18:42:42 crc kubenswrapper[4897]: I0214 18:42:42.476054 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 18:42:42 crc kubenswrapper[4897]: I0214 18:42:42.479825 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 14 18:42:42 crc kubenswrapper[4897]: I0214 18:42:42.480012 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 18:42:42 crc kubenswrapper[4897]: I0214 18:42:42.734317 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:37:19.363235827 +0000 UTC Feb 14 18:42:43 crc kubenswrapper[4897]: I0214 18:42:43.527555 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]log ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]etcd ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 14 18:42:43 crc kubenswrapper[4897]: 
[+]poststarthook/openshift.io-api-request-count-filter ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/generic-apiserver-start-informers ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-filter ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-apiextensions-informers ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-apiextensions-controllers ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/crd-informer-synced ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-system-namespaces-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 14 18:42:43 crc kubenswrapper[4897]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 14 18:42:43 crc kubenswrapper[4897]: 
[+]poststarthook/bootstrap-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/start-kube-aggregator-informers ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-registration-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-discovery-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]autoregister-completion ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapi-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 14 18:42:43 crc kubenswrapper[4897]: livez check failed Feb 14 18:42:43 crc kubenswrapper[4897]: I0214 18:42:43.527636 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:42:43 crc kubenswrapper[4897]: I0214 18:42:43.735101 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:22:57.946904557 +0000 UTC Feb 14 18:42:44 crc kubenswrapper[4897]: I0214 18:42:44.736070 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:14:36.242738138 +0000 UTC Feb 14 18:42:45 crc kubenswrapper[4897]: I0214 18:42:45.473155 
4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 14 18:42:45 crc kubenswrapper[4897]: I0214 18:42:45.473277 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 14 18:42:45 crc kubenswrapper[4897]: I0214 18:42:45.736414 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:14:59.484312458 +0000 UTC Feb 14 18:42:45 crc kubenswrapper[4897]: I0214 18:42:45.966526 4897 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.736816 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:26:23.154796386 +0000 UTC Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.871124 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.871348 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.872882 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.873165 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.873429 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.896507 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.909615 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.910814 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.910879 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:46 crc kubenswrapper[4897]: I0214 18:42:46.910898 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.470008 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.472295 4897 trace.go:236] Trace[466062432]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 18:42:34.865) (total time: 12606ms): Feb 14 18:42:47 crc kubenswrapper[4897]: Trace[466062432]: ---"Objects listed" error: 12606ms (18:42:47.472) Feb 14 18:42:47 crc kubenswrapper[4897]: Trace[466062432]: [12.606819431s] [12.606819431s] END Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.472347 4897 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 
18:42:47.475737 4897 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.475812 4897 trace.go:236] Trace[2079478301]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 18:42:34.701) (total time: 12774ms): Feb 14 18:42:47 crc kubenswrapper[4897]: Trace[2079478301]: ---"Objects listed" error: 12774ms (18:42:47.475) Feb 14 18:42:47 crc kubenswrapper[4897]: Trace[2079478301]: [12.774116452s] [12.774116452s] END Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.475837 4897 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.477128 4897 trace.go:236] Trace[991609121]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 18:42:35.896) (total time: 11580ms): Feb 14 18:42:47 crc kubenswrapper[4897]: Trace[991609121]: ---"Objects listed" error: 11580ms (18:42:47.476) Feb 14 18:42:47 crc kubenswrapper[4897]: Trace[991609121]: [11.580672558s] [11.580672558s] END Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.477173 4897 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.478149 4897 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.485407 4897 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.527603 4897 csr.go:261] certificate signing request csr-qfqsx is approved, waiting to be issued Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.546877 4897 csr.go:257] certificate signing request csr-qfqsx is 
issued Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.554926 4897 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 14 18:42:47 crc kubenswrapper[4897]: W0214 18:42:47.555476 4897 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.555328 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.1894311b8806bbfb\": read tcp 38.102.83.41:51144->38.102.83.41:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.1894311b8806bbfb default 26172 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 18:42:27 +0000 UTC,LastTimestamp:2026-02-14 18:42:27.895473282 +0000 UTC m=+0.871881815,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 18:42:47 crc kubenswrapper[4897]: W0214 18:42:47.555563 4897 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 14 18:42:47 crc kubenswrapper[4897]: W0214 18:42:47.555886 4897 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a 
second and no items received Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.710050 4897 apiserver.go:52] "Watching apiserver" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.735308 4897 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.735598 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736114 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.736195 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736334 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736443 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.736571 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736641 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736586 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736617 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.736841 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.736914 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:41:10.785692893 +0000 UTC Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.739866 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740194 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740270 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740257 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740590 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740665 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740729 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740805 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.740883 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.761904 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.777796 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.787010 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.796225 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.810387 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.820705 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.827649 4897 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.830817 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.841516 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.859552 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.871776 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.878321 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.878367 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.878387 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.879501 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.879582 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.879565 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.879607 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.879692 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880070 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880121 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880172 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880195 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880520 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880615 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.880850 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.881051 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888181 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888310 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888344 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888370 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888395 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888427 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888453 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888480 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888504 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888536 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888578 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888605 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888628 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888658 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888722 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888747 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888773 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888847 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.888873 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889063 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889104 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889134 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889162 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889188 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889211 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889235 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 18:42:47 crc 
kubenswrapper[4897]: I0214 18:42:47.889260 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889281 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889305 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889340 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889362 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889386 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889410 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889433 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889458 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889479 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889501 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " 
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890809 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.891903 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.892017 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.892110 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.892143 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.892280 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.889384 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.895155 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890507 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890533 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890656 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890726 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890761 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890842 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.890945 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.891814 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.892378 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:42:48.392353254 +0000 UTC m=+21.368761727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.892551 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.893276 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.893296 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.893333 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.893178 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.893859 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.894143 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.894240 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.893847 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.894539 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.894834 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.894994 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.895729 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.895953 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.896150 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.896319 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.896925 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.896994 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897240 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897332 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897381 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897412 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897654 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897824 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.898163 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.898219 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.898696 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.898778 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.898784 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899284 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899367 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899400 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899425 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899451 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899469 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.897272 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899886 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899494 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.900574 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.900756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.899630 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.900717 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902501 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902591 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902685 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902783 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902954 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903045 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903150 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903251 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902642 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.902907 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903198 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903358 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903311 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903524 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903542 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903652 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903759 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903884 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903325 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.903983 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904007 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904041 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 
18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904060 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904080 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904098 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904114 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904120 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904200 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904221 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904239 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904263 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904280 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904298 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904315 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904334 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904355 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904376 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904395 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904405 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904414 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904434 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904456 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904476 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904495 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904516 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904533 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904549 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904564 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904550 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904582 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904609 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904639 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904661 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904681 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904698 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904717 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904738 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904757 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904779 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904796 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904831 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904853 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904883 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 
18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904901 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904921 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904941 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904958 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904977 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.904997 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905016 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905050 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905069 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905088 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905110 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 18:42:47 crc 
kubenswrapper[4897]: I0214 18:42:47.905124 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905222 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905340 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905713 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905828 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905130 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.905887 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906094 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906150 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906161 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906225 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906254 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906255 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906284 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906311 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906343 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906369 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906390 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906412 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906437 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906459 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906481 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906518 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906545 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 
18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906571 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906598 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906620 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906643 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906667 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906691 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906714 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906742 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906769 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906791 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906811 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906830 
4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906852 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906874 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906896 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906922 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906949 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906975 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906997 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907246 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907275 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907296 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907316 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907338 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907365 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907385 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907402 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907422 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 
crc kubenswrapper[4897]: I0214 18:42:47.907442 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907465 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907488 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907517 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907538 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907559 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907581 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907599 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907617 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907638 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907656 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 
18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907676 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907696 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907725 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907744 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907761 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907781 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907801 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907821 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907867 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907888 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907916 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907960 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907981 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908002 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908021 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908054 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908098 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908124 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908150 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908180 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908210 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908232 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908260 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908285 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908313 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:47 
crc kubenswrapper[4897]: I0214 18:42:47.908338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908384 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908408 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908431 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908535 4897 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908550 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908562 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908573 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908584 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908597 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908609 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908621 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908632 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908642 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908652 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908665 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908676 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908686 4897 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on 
node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908697 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908708 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908719 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908732 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908744 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908754 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908765 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908777 4897 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908787 4897 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908798 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908810 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908821 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908833 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908845 4897 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908857 4897 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908872 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908887 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908902 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908913 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908924 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908935 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908946 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908958 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908972 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908982 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908993 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909005 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909015 4897 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909040 4897 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc 
kubenswrapper[4897]: I0214 18:42:47.909066 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909079 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909093 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909106 4897 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909119 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909130 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909140 4897 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909150 4897 reconciler_common.go:293] "Volume detached for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909162 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909171 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909180 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909192 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909201 4897 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909211 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909221 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909231 4897 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909239 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909250 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909260 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909270 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909279 4897 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909289 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909298 4897 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909308 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909320 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909332 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906480 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906499 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914818 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907127 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907333 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914846 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907448 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907474 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907572 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907779 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.907832 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908129 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908375 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908425 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908733 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.908816 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909040 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909084 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909138 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.909430 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909653 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909722 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.909977 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.910142 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.910169 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.910317 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.910460 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.910515 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.910977 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.911497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.911379 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.911739 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.912153 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.912354 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.912789 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.912875 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.913310 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.913614 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.913639 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.913864 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914010 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914021 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914284 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914496 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914728 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914744 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.906529 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914937 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914968 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.914988 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915213 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915226 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915400 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.915407 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.916161 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:48.416137562 +0000 UTC m=+21.392546055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915810 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915670 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915737 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.915849 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.916613 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.916991 4897 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.917102 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.917114 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.917699 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.918207 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:48.418173366 +0000 UTC m=+21.394581879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.918259 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.918457 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.918557 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.918590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.919373 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.919466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.919605 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.920391 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.920543 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.920963 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.921124 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.921285 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.920677 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.921585 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.921856 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922022 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922453 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922490 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922558 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922679 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922677 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.922781 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.923526 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.923599 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.923613 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.923972 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.924452 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.924840 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.925319 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.925641 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.926137 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.926219 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.926595 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.926748 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.926792 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.926904 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.927198 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.927515 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.927565 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929738 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.927722 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.927839 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.928301 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.928469 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929596 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929823 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929835 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.928897 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929265 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929810 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929873 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929402 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.929961 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.930687 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.930691 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.931562 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.932216 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.933209 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.933603 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.933878 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.936007 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.956468 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.959266 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.959480 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.959795 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.959812 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.959829 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.959908 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:48.459870382 +0000 UTC m=+21.436278875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.960012 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.960051 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.960065 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:47 crc kubenswrapper[4897]: E0214 18:42:47.960096 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:48.460087528 +0000 UTC m=+21.436496021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.967786 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.968344 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.970191 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.973218 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:47 crc kubenswrapper[4897]: I0214 18:42:47.973791 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010758 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010843 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010910 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010925 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010940 4897 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010954 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010966 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010977 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.010989 4897 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011001 4897 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 18:42:48 crc 
kubenswrapper[4897]: I0214 18:42:48.011013 4897 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011095 4897 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011111 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011123 4897 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011135 4897 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011151 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011163 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011175 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011188 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011201 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011213 4897 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011225 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011238 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011250 4897 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011263 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011275 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011287 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011298 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011310 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011321 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011333 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011344 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011360 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011374 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011386 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011397 4897 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011409 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011423 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011434 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011445 4897 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011459 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011476 4897 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011493 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011511 4897 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011524 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011536 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011549 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011561 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011573 4897 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011586 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011598 4897 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011610 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011622 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011633 4897 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011645 4897 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011657 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011670 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011682 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011697 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011713 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011734 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011750 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011766 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011784 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011801 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011866 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011881 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011894 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011908 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011920 4897 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011932 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011943 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011955 4897 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011967 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011980 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.011992 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012005 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012019 4897 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012057 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012072 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012086 4897 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012099 4897 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012111 4897 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012123 4897 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012138 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012150 4897 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012164 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012178 4897 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012194 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012210 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012228 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012246 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012263 4897 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012278 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012299 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012315 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012328 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012341 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012353 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012366 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012378 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012390 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012405 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012416 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012428 4897 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012439 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012451 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012462 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012474 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012488 4897 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012500 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012512 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012526 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012538 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012566 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012578 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012590 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012601 4897 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012614 4897 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012627 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012639 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012651 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012665 4897 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012678 4897 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012690 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012703 4897 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012716 4897 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012786 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.012865 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.047183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.054479 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.060294 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.101115 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ldvzr"]
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.101493 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.103366 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.103639 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.103908 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.105113 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rpwkf"]
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.105509 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rnbbh"]
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.106104 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fz879"]
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.106659 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.106854 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rpwkf"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.107688 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rnbbh"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.106862 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.108539 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k5mzq"]
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.109513 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.109638 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.113282 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.113892 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114270 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114338 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114385 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114480 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114656 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114705 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114819 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114878 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.114960 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.115131 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.115303 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.116884 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.118177 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.118994 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.119422 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.124697 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.132646 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.143289 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.157092 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.168119 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.178369 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.191990 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.204653 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215332 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/39dde9bd-372a-45b1-bfa5-937929b27c20-hosts-file\") pod \"node-resolver-rpwkf\" (UID: \"39dde9bd-372a-45b1-bfa5-937929b27c20\") " pod="openshift-dns/node-resolver-rpwkf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215370 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-system-cni-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215392 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-netns\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215415 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-node-log\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215435 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-conf-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215452 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f885c6c-b913-48e3-93fc-abf932515ea9-mcd-auth-proxy-config\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215475 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/302cd01a-17a5-4519-aa94-02e79495e73c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215600 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-systemd\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215650 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-cnibin\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215697 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-784sc\" (UniqueName: \"kubernetes.io/projected/39dde9bd-372a-45b1-bfa5-937929b27c20-kube-api-access-784sc\") pod \"node-resolver-rpwkf\" (UID: \"39dde9bd-372a-45b1-bfa5-937929b27c20\") " pod="openshift-dns/node-resolver-rpwkf" Feb 
14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215724 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-kubelet\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215743 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vst\" (UniqueName: \"kubernetes.io/projected/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-kube-api-access-n7vst\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215770 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-slash\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215841 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-etc-kubernetes\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215874 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-systemd-units\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc 
kubenswrapper[4897]: I0214 18:42:48.215896 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f885c6c-b913-48e3-93fc-abf932515ea9-proxy-tls\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215921 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-system-cni-dir\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215950 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-cni-multus\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215973 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-cnibin\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.215992 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-etc-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216010 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216049 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdfm\" (UniqueName: \"kubernetes.io/projected/302cd01a-17a5-4519-aa94-02e79495e73c-kube-api-access-lbdfm\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216074 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-os-release\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216095 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-cni-binary-copy\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216114 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-log-socket\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216163 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9f885c6c-b913-48e3-93fc-abf932515ea9-rootfs\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216198 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/302cd01a-17a5-4519-aa94-02e79495e73c-cni-binary-copy\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216219 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-env-overrides\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216245 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-k8s-cni-cncf-io\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216261 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7j56\" (UniqueName: \"kubernetes.io/projected/f304b761-40a3-41ba-af33-a2b0634a55fb-kube-api-access-j7j56\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-os-release\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216297 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-script-lib\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216314 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-multus-certs\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216336 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-var-lib-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216365 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-ovn-kubernetes\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216398 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-bin\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216460 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-cni-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216559 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-socket-dir-parent\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216623 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-cni-bin\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-hostroot\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-config\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216723 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlkb\" (UniqueName: \"kubernetes.io/projected/9f885c6c-b913-48e3-93fc-abf932515ea9-kube-api-access-wqlkb\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216749 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-netns\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216791 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-ovn\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216815 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-kubelet\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216836 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-daemon-config\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216856 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-netd\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.216884 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f304b761-40a3-41ba-af33-a2b0634a55fb-ovn-node-metrics-cert\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.217501 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.227231 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.242213 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.254091 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.264253 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.273508 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.283291 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.295985 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.309196 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318330 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-kubelet\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318368 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-daemon-config\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318387 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-netd\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318475 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f304b761-40a3-41ba-af33-a2b0634a55fb-ovn-node-metrics-cert\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318773 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-netd\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318881 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-kubelet\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318507 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-system-cni-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318978 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-netns\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.318998 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-node-log\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319022 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/39dde9bd-372a-45b1-bfa5-937929b27c20-hosts-file\") pod \"node-resolver-rpwkf\" (UID: \"39dde9bd-372a-45b1-bfa5-937929b27c20\") " pod="openshift-dns/node-resolver-rpwkf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319058 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f885c6c-b913-48e3-93fc-abf932515ea9-mcd-auth-proxy-config\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319078 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/302cd01a-17a5-4519-aa94-02e79495e73c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319087 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-system-cni-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319124 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-conf-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319182 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-netns\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-conf-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319253 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-node-log\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319263 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/39dde9bd-372a-45b1-bfa5-937929b27c20-hosts-file\") 
pod \"node-resolver-rpwkf\" (UID: \"39dde9bd-372a-45b1-bfa5-937929b27c20\") " pod="openshift-dns/node-resolver-rpwkf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319384 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.319983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f885c6c-b913-48e3-93fc-abf932515ea9-mcd-auth-proxy-config\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.320127 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-daemon-config\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321583 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/302cd01a-17a5-4519-aa94-02e79495e73c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321692 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-cnibin\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: 
\"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321773 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-cnibin\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-784sc\" (UniqueName: \"kubernetes.io/projected/39dde9bd-372a-45b1-bfa5-937929b27c20-kube-api-access-784sc\") pod \"node-resolver-rpwkf\" (UID: \"39dde9bd-372a-45b1-bfa5-937929b27c20\") " pod="openshift-dns/node-resolver-rpwkf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321858 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-kubelet\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321896 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-systemd\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321918 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-slash\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321947 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vst\" (UniqueName: \"kubernetes.io/projected/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-kube-api-access-n7vst\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.321982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-systemd-units\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322002 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f885c6c-b913-48e3-93fc-abf932515ea9-proxy-tls\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322042 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-system-cni-dir\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322071 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-cni-multus\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" 
Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-etc-kubernetes\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322118 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-cnibin\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-etc-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322162 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322185 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdfm\" (UniqueName: \"kubernetes.io/projected/302cd01a-17a5-4519-aa94-02e79495e73c-kube-api-access-lbdfm\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322211 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-cni-binary-copy\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322235 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-log-socket\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322256 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9f885c6c-b913-48e3-93fc-abf932515ea9-rootfs\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322285 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/302cd01a-17a5-4519-aa94-02e79495e73c-cni-binary-copy\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322308 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-os-release\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322340 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-k8s-cni-cncf-io\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322372 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-env-overrides\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-os-release\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-script-lib\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7j56\" (UniqueName: \"kubernetes.io/projected/f304b761-40a3-41ba-af33-a2b0634a55fb-kube-api-access-j7j56\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322469 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-multus-certs\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322491 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-var-lib-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322513 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-bin\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-ovn-kubernetes\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322559 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-cni-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-socket-dir-parent\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322605 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-cni-bin\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322628 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-hostroot\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-config\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322694 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlkb\" (UniqueName: \"kubernetes.io/projected/9f885c6c-b913-48e3-93fc-abf932515ea9-kube-api-access-wqlkb\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322719 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-netns\") pod 
\"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322741 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-ovn\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322763 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.322836 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323230 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-kubelet\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323269 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-systemd\") pod \"ovnkube-node-fz879\" (UID: 
\"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323296 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-slash\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323396 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323511 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-os-release\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323808 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9f885c6c-b913-48e3-93fc-abf932515ea9-rootfs\") pod \"machine-config-daemon-k5mzq\" (UID: 
\"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323873 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-var-lib-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.323983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-env-overrides\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324376 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f304b761-40a3-41ba-af33-a2b0634a55fb-ovn-node-metrics-cert\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324446 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/302cd01a-17a5-4519-aa94-02e79495e73c-cni-binary-copy\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324526 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-os-release\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " 
pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324529 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-multus-certs\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-k8s-cni-cncf-io\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324578 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-cni-multus\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324936 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-cni-dir\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.324965 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-systemd-units\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325179 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-etc-kubernetes\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325192 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-cni-binary-copy\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325216 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-run-netns\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325247 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-host-var-lib-cni-bin\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325251 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-system-cni-dir\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325270 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-hostroot\") pod 
\"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325294 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-ovn-kubernetes\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325323 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-log-socket\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325328 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-cnibin\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325347 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-bin\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325371 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-etc-openvswitch\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: 
I0214 18:42:48.325378 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-ovn\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325705 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-config\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325824 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/302cd01a-17a5-4519-aa94-02e79495e73c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.325906 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-multus-socket-dir-parent\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.327597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-script-lib\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.329270 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f885c6c-b913-48e3-93fc-abf932515ea9-proxy-tls\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.345795 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdfm\" (UniqueName: \"kubernetes.io/projected/302cd01a-17a5-4519-aa94-02e79495e73c-kube-api-access-lbdfm\") pod \"multus-additional-cni-plugins-rnbbh\" (UID: \"302cd01a-17a5-4519-aa94-02e79495e73c\") " pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.349560 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-784sc\" (UniqueName: \"kubernetes.io/projected/39dde9bd-372a-45b1-bfa5-937929b27c20-kube-api-access-784sc\") pod \"node-resolver-rpwkf\" (UID: \"39dde9bd-372a-45b1-bfa5-937929b27c20\") " pod="openshift-dns/node-resolver-rpwkf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.352139 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vst\" (UniqueName: \"kubernetes.io/projected/b5b30895-0d98-44e4-8e31-2c5ebe5e1850-kube-api-access-n7vst\") pod \"multus-ldvzr\" (UID: \"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\") " pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.354961 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7j56\" (UniqueName: \"kubernetes.io/projected/f304b761-40a3-41ba-af33-a2b0634a55fb-kube-api-access-j7j56\") pod \"ovnkube-node-fz879\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.363854 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlkb\" (UniqueName: 
\"kubernetes.io/projected/9f885c6c-b913-48e3-93fc-abf932515ea9-kube-api-access-wqlkb\") pod \"machine-config-daemon-k5mzq\" (UID: \"9f885c6c-b913-48e3-93fc-abf932515ea9\") " pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.424394 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.424557 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.424594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.424705 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.424780 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:49.424761615 +0000 UTC m=+22.401170098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.424870 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:42:49.424862158 +0000 UTC m=+22.401270641 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.424961 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.424993 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:49.424986372 +0000 UTC m=+22.401394855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.466756 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ldvzr" Feb 14 18:42:48 crc kubenswrapper[4897]: W0214 18:42:48.478602 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b30895_0d98_44e4_8e31_2c5ebe5e1850.slice/crio-0c3c28ceccb04c16ac7da476b4f80456b527074892f3bf8758e487b7f66354ca WatchSource:0}: Error finding container 0c3c28ceccb04c16ac7da476b4f80456b527074892f3bf8758e487b7f66354ca: Status 404 returned error can't find the container with id 0c3c28ceccb04c16ac7da476b4f80456b527074892f3bf8758e487b7f66354ca Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.493437 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.498151 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rpwkf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.507570 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.508775 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.512741 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.519437 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.519882 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.525393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.525568 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.525832 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.525875 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.525884 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.525917 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.525892 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.525998 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.526068 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:49.52600133 +0000 UTC m=+22.502409853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.526114 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:49.526093042 +0000 UTC m=+22.502501565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.527877 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.532784 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.532958 4897 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.533990 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.537270 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.546781 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.556410 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-14 18:37:47 +0000 UTC, rotation deadline is 2026-12-13 01:36:13.149703479 +0000 UTC Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.556498 4897 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7230h53m24.593209622s for next certificate rotation Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.573752 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.577762 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.598309 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.615819 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.630885 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.649675 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.656615 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.671005 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.687264 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.699649 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.717972 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.736182 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.737376 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:28:21.881441111 +0000 UTC Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.745514 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.760003 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:31Z\\\",\\\"message\\\":\\\"W0214 18:42:31.166394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 18:42:31.167022 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771094551 cert, and key in /tmp/serving-cert-291422904/serving-signer.crt, /tmp/serving-cert-291422904/serving-signer.key\\\\nI0214 18:42:31.440800 1 observer_polling.go:159] Starting file observer\\\\nW0214 18:42:31.443905 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 18:42:31.444292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:31.445487 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-291422904/tls.crt::/tmp/serving-cert-291422904/tls.key\\\\\\\"\\\\nF0214 18:42:31.928991 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.778086 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.793050 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.793093 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.793146 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.793197 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.793313 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.793469 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.799191 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.812156 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.827322 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.866086 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.905906 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.926742 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.961141 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"787776a0af71efe026c7326257436d9cbf04d64c9c346740e52aebf720244a54"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.963019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.963084 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.963096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"245b5a5135404dc2ab773e679e4870efe0ef51ae05024aeb5fb8e219207ac6f5"} Feb 14 
18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.964800 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerStarted","Data":"04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.964826 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerStarted","Data":"9fdcbfa44a25ff044e4fcd9ae2c3e0bf90d1df36494102c7e5848e99cb235231"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.966519 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerStarted","Data":"491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.966583 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerStarted","Data":"0c3c28ceccb04c16ac7da476b4f80456b527074892f3bf8758e487b7f66354ca"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.968234 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5" exitCode=0 Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.968318 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.969847 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"94f2c9d0841081233151eb26444a5ad930742620ed1d41ae4112ef4e7a9c6506"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.970967 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.971111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.971213 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"8dcf01ba69eed1a63b5a1e87177d7c183e9b1926174c9040af907967306edfa3"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.972578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rpwkf" event={"ID":"39dde9bd-372a-45b1-bfa5-937929b27c20","Type":"ContainerStarted","Data":"fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.972611 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rpwkf" event={"ID":"39dde9bd-372a-45b1-bfa5-937929b27c20","Type":"ContainerStarted","Data":"6f55ee7d7881ab57a0ec11c50ab65dd217635e915fd1ea6ed76e6af958134a26"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.974153 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.974185 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5b391aa1bc9c4cadfbeaba6685272c23638456026a58477666c1e77f980e4989"} Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.974673 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.976258 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.976793 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.978699 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a" exitCode=255 Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.978778 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a"} Feb 
14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.978858 4897 scope.go:117] "RemoveContainer" containerID="062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7" Feb 14 18:42:48 crc kubenswrapper[4897]: I0214 18:42:48.979447 4897 scope.go:117] "RemoveContainer" containerID="e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a" Feb 14 18:42:48 crc kubenswrapper[4897]: E0214 18:42:48.979630 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.001843 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.021905 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network
-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.038083 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.054558 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.075057 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.093570 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:31Z\\\",\\\"message\\\":\\\"W0214 18:42:31.166394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 18:42:31.167022 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771094551 cert, and key in /tmp/serving-cert-291422904/serving-signer.crt, /tmp/serving-cert-291422904/serving-signer.key\\\\nI0214 18:42:31.440800 1 observer_polling.go:159] Starting file observer\\\\nW0214 18:42:31.443905 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 18:42:31.444292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:31.445487 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-291422904/tls.crt::/tmp/serving-cert-291422904/tls.key\\\\\\\"\\\\nF0214 18:42:31.928991 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.105844 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.119107 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.134712 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.167008 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.207671 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.253129 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.289093 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.337763 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:49Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.439385 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.439597 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.439695 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:42:51.439646134 +0000 UTC m=+24.416054617 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.439741 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.439782 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.439818 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:51.439797409 +0000 UTC m=+24.416205902 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.440058 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.440207 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:51.440141029 +0000 UTC m=+24.416549552 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.540723 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.540851 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.540991 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.541062 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.541085 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.541003 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.541148 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.541162 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:49 crc 
kubenswrapper[4897]: E0214 18:42:49.541169 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:51.541142917 +0000 UTC m=+24.517551440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.541198 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:51.541186429 +0000 UTC m=+24.517594912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.738280 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:38:51.323380774 +0000 UTC Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.798907 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.799929 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.801453 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.802281 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.803540 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.804262 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.805094 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.806440 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.807374 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.810176 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.810776 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.812129 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.812691 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.813257 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.814217 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.814751 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.815717 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.816197 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.816767 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.817795 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.818306 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.819374 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.819820 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.820844 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.821326 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.821942 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.823444 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.823935 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.825151 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.825837 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.826726 4897 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.826826 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.828593 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.829575 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.830041 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.831564 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.832392 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 14 18:42:49 
crc kubenswrapper[4897]: I0214 18:42:49.833452 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.834144 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.835254 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.835864 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.837016 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.837713 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.838813 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.839336 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 14 18:42:49 
crc kubenswrapper[4897]: I0214 18:42:49.840251 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.840807 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.842021 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.842563 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.843455 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.843930 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.844974 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.845775 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 14 18:42:49 
crc kubenswrapper[4897]: I0214 18:42:49.846494 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.983747 4897 generic.go:334] "Generic (PLEG): container finished" podID="302cd01a-17a5-4519-aa94-02e79495e73c" containerID="04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8" exitCode=0 Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.983864 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerDied","Data":"04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8"} Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.986266 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 14 18:42:49 crc kubenswrapper[4897]: I0214 18:42:49.992069 4897 scope.go:117] "RemoveContainer" containerID="e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a" Feb 14 18:42:49 crc kubenswrapper[4897]: E0214 18:42:49.992271 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.010356 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://062333b31875d0ef2681960fdddf5f6c2b75749636f0df390a9e515de11feef7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:31Z\\\",\\\"message\\\":\\\"W0214 18:42:31.166394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 18:42:31.167022 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771094551 cert, and key in /tmp/serving-cert-291422904/serving-signer.crt, /tmp/serving-cert-291422904/serving-signer.key\\\\nI0214 18:42:31.440800 1 observer_polling.go:159] Starting file observer\\\\nW0214 18:42:31.443905 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 18:42:31.444292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:31.445487 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-291422904/tls.crt::/tmp/serving-cert-291422904/tls.key\\\\\\\"\\\\nF0214 18:42:31.928991 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.029379 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.047065 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.062169 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.075650 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.107160 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.121829 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.138115 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.158623 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.178734 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.204670 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.226754 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.259246 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.285580 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.298847 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.313260 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.327570 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.346972 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.362662 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.384296 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.405651 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.420486 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.434580 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.453494 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.475760 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.488501 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:50Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.627564 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.739420 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:07:38.174376788 +0000 UTC Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.793258 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:50 crc kubenswrapper[4897]: E0214 18:42:50.793412 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.793472 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:50 crc kubenswrapper[4897]: I0214 18:42:50.793566 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:50 crc kubenswrapper[4897]: E0214 18:42:50.793659 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:50 crc kubenswrapper[4897]: E0214 18:42:50.793783 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.000383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.000428 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.000438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.000448 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.000459 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.000466 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" 
event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.003047 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerDied","Data":"b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a"} Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.003013 4897 generic.go:334] "Generic (PLEG): container finished" podID="302cd01a-17a5-4519-aa94-02e79495e73c" containerID="b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a" exitCode=0 Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.004780 4897 scope.go:117] "RemoveContainer" containerID="e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a" Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.005249 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.030431 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.053749 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.074400 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.088904 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.111532 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.129640 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.143770 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.158131 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.178335 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.193023 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.214613 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.230563 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.263124 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:51Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.460972 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.461193 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:51 crc kubenswrapper[4897]: 
I0214 18:42:51.461249 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.461304 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:42:55.46125232 +0000 UTC m=+28.437660853 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.461382 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.461491 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:55.461461587 +0000 UTC m=+28.437870100 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.461834 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.461944 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:55.461923001 +0000 UTC m=+28.438331514 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.562631 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.562736 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.562855 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.562898 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.562924 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.562967 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.563007 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.563069 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 
18:42:51.563025 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:55.562994732 +0000 UTC m=+28.539403255 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:51 crc kubenswrapper[4897]: E0214 18:42:51.563137 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:42:55.563115825 +0000 UTC m=+28.539524338 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:51 crc kubenswrapper[4897]: I0214 18:42:51.740902 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:49:02.354883424 +0000 UTC Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.013070 4897 generic.go:334] "Generic (PLEG): container finished" podID="302cd01a-17a5-4519-aa94-02e79495e73c" containerID="2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82" exitCode=0 Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.013172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerDied","Data":"2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82"} Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.023173 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e"} Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.046387 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 
18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.070999 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.096597 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.119189 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.140315 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.165623 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.199100 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.229207 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.245454 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.262997 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.278552 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.293754 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.308767 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.324870 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.348802 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.369667 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.383351 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.406561 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.428179 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.444648 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.459349 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.480889 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.494971 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.509219 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.529349 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93
b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.545875 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:52Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.743089 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:11:07.334743976 +0000 UTC Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.793261 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.793286 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:52 crc kubenswrapper[4897]: I0214 18:42:52.793310 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:52 crc kubenswrapper[4897]: E0214 18:42:52.793511 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:52 crc kubenswrapper[4897]: E0214 18:42:52.793595 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:52 crc kubenswrapper[4897]: E0214 18:42:52.793724 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.033002 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"} Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.037142 4897 generic.go:334] "Generic (PLEG): container finished" podID="302cd01a-17a5-4519-aa94-02e79495e73c" containerID="31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6" exitCode=0 Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.037317 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerDied","Data":"31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6"} Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.054860 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.071058 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.104421 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.126419 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.139767 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.159878 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.177147 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.190793 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.205599 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.219533 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.232593 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.250633 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.263183 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.744219 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:30:07.132007909 +0000 UTC Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.878819 4897 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.887656 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.887732 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.887801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.888010 4897 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.895339 4897 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.895740 4897 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.897046 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.897074 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.897085 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.897107 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.897117 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:53Z","lastTransitionTime":"2026-02-14T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:53 crc kubenswrapper[4897]: E0214 18:42:53.910234 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.913896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.913926 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.913936 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.913952 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.913963 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:53Z","lastTransitionTime":"2026-02-14T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:53 crc kubenswrapper[4897]: E0214 18:42:53.925924 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.929213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.929250 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.929261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.929284 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.929297 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:53Z","lastTransitionTime":"2026-02-14T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:53 crc kubenswrapper[4897]: E0214 18:42:53.940844 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.944574 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.944609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.944622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.944640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.944653 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:53Z","lastTransitionTime":"2026-02-14T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:53 crc kubenswrapper[4897]: E0214 18:42:53.961276 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.968697 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.968734 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.968750 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.968771 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:53 crc kubenswrapper[4897]: I0214 18:42:53.968799 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:53Z","lastTransitionTime":"2026-02-14T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: E0214 18:42:54.010277 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:53Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: E0214 18:42:54.010979 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.016616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.017281 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.017498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.017634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.017763 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.042502 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerStarted","Data":"c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.066525 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.077880 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.089712 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.101895 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.112194 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.120387 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.120449 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.120467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.120492 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.120510 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.123689 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa3919096
9d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.134219 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.148895 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbd
fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.162255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.180988 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.194185 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.208399 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.218600 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:54Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.223114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.223169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.223186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc 
kubenswrapper[4897]: I0214 18:42:54.223212 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.223230 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.325686 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.325760 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.325783 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.325811 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.325835 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.429014 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.429085 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.429097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.429118 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.429132 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.533777 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.534144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.534228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.534382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.534491 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.637683 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.637732 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.637750 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.637774 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.637791 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.741463 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.741841 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.741926 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.741999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.742096 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.744528 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:20:14.125239101 +0000 UTC Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.793502 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.793537 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.793575 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:54 crc kubenswrapper[4897]: E0214 18:42:54.794205 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:54 crc kubenswrapper[4897]: E0214 18:42:54.793991 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:54 crc kubenswrapper[4897]: E0214 18:42:54.794378 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.846125 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.846602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.846730 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.846803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.846877 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.949901 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.949949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.949966 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.949990 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:54 crc kubenswrapper[4897]: I0214 18:42:54.950007 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:54Z","lastTransitionTime":"2026-02-14T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.048885 4897 generic.go:334] "Generic (PLEG): container finished" podID="302cd01a-17a5-4519-aa94-02e79495e73c" containerID="c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a" exitCode=0 Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.048935 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerDied","Data":"c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.054573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.054629 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.054647 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.054686 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.054707 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.082394 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.102838 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.124652 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.140463 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.155980 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.164347 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.164398 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.164410 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc 
kubenswrapper[4897]: I0214 18:42:55.164431 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.164446 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.173878 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 
18:42:55.195228 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 
18:42:55.208156 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.229697 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.245190 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.259521 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.267145 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.267194 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.267204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.267225 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.267241 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.284515 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.305316 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.370863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.370929 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.370947 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.370974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.370996 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.474798 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.474863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.474887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.474916 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.474938 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.513818 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.513999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.514095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.514182 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:43:03.514136443 +0000 UTC m=+36.490544966 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.514265 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.514356 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:03.514333829 +0000 UTC m=+36.490742352 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.514368 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.514511 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 18:43:03.514483614 +0000 UTC m=+36.490892137 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.578384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.578443 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.578452 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.578472 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.578483 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.615242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.615313 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615477 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615506 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615524 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615602 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:03.615580395 +0000 UTC m=+36.591988908 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615634 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615693 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615753 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:55 crc kubenswrapper[4897]: E0214 18:42:55.615856 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:03.615823482 +0000 UTC m=+36.592232005 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.681268 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.681317 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.681326 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.681346 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.681359 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.745714 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:38:10.012247508 +0000 UTC Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.784082 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.784145 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.784163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.784193 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.784213 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.887005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.887091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.887111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.887133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.887149 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.991546 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.991597 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.991610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.991629 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:55 crc kubenswrapper[4897]: I0214 18:42:55.991644 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:55Z","lastTransitionTime":"2026-02-14T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.056504 4897 generic.go:334] "Generic (PLEG): container finished" podID="302cd01a-17a5-4519-aa94-02e79495e73c" containerID="6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3" exitCode=0 Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.056600 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerDied","Data":"6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.066483 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.066731 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.066971 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.070536 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.096110 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.098404 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.098453 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.098467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.098488 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.098499 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.108867 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.110630 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.113443 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.123206 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.142846 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.156004 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.170063 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.193391 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.200900 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.201296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.201312 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.201334 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.201347 4897 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.218947 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.245498 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.265616 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.282903 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.298672 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.303537 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.303565 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.303573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc 
kubenswrapper[4897]: I0214 18:42:56.303589 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.303601 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.318473 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.329680 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z"
Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.346962 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.365511 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.390250 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.396855 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6wh27"] Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.397243 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.399467 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.399525 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.399537 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.399826 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.405142 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.405909 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.405948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.405961 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.405977 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.405988 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.420736 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.422708 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f5a3174-286c-4e61-a682-3367cc751fee-host\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.422769 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f5a3174-286c-4e61-a682-3367cc751fee-serviceca\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.422804 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggbg\" (UniqueName: \"kubernetes.io/projected/2f5a3174-286c-4e61-a682-3367cc751fee-kube-api-access-qggbg\") pod 
\"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.433872 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\
\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.446069 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.468219 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.484453 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.501666 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.508682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.508723 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.508732 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.508750 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.508762 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.522500 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:
42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.523989 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f5a3174-286c-4e61-a682-3367cc751fee-serviceca\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.524181 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggbg\" (UniqueName: \"kubernetes.io/projected/2f5a3174-286c-4e61-a682-3367cc751fee-kube-api-access-qggbg\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.524242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f5a3174-286c-4e61-a682-3367cc751fee-host\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.524357 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f5a3174-286c-4e61-a682-3367cc751fee-host\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.526082 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/2f5a3174-286c-4e61-a682-3367cc751fee-serviceca\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.539967 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc
18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.556366 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggbg\" (UniqueName: \"kubernetes.io/projected/2f5a3174-286c-4e61-a682-3367cc751fee-kube-api-access-qggbg\") pod \"node-ca-6wh27\" (UID: \"2f5a3174-286c-4e61-a682-3367cc751fee\") " pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.557020 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.579208 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.596302 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.610987 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.611110 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc 
kubenswrapper[4897]: I0214 18:42:56.611132 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.611156 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.611176 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.617391 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.631143 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.645596 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.661751 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.676486 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.691259 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 
18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.702250 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.714055 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.714096 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.714105 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.714120 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.714131 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.720311 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.721554 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6wh27" Feb 14 18:42:56 crc kubenswrapper[4897]: W0214 18:42:56.736603 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5a3174_286c_4e61_a682_3367cc751fee.slice/crio-4b55399cde56fb0d2a5f27f08b3408a808120c126c80b4995e458b905bb4434a WatchSource:0}: Error finding container 4b55399cde56fb0d2a5f27f08b3408a808120c126c80b4995e458b905bb4434a: Status 404 returned error can't find the container with id 4b55399cde56fb0d2a5f27f08b3408a808120c126c80b4995e458b905bb4434a Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.737904 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.746224 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:28:15.200319511 +0000 UTC Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.753385 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:56Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.793603 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.793629 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.793745 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:56 crc kubenswrapper[4897]: E0214 18:42:56.793897 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:56 crc kubenswrapper[4897]: E0214 18:42:56.794114 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:56 crc kubenswrapper[4897]: E0214 18:42:56.794356 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.802463 4897 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.817257 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.817303 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.817321 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.817346 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.817364 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.919865 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.919901 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.919912 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.919928 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:56 crc kubenswrapper[4897]: I0214 18:42:56.919940 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:56Z","lastTransitionTime":"2026-02-14T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.021974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.022019 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.022052 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.022074 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.022092 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.072722 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6wh27" event={"ID":"2f5a3174-286c-4e61-a682-3367cc751fee","Type":"ContainerStarted","Data":"562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.072817 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6wh27" event={"ID":"2f5a3174-286c-4e61-a682-3367cc751fee","Type":"ContainerStarted","Data":"4b55399cde56fb0d2a5f27f08b3408a808120c126c80b4995e458b905bb4434a"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.084826 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" event={"ID":"302cd01a-17a5-4519-aa94-02e79495e73c","Type":"ContainerStarted","Data":"1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.084973 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.089408 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.106854 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.125099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.125152 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.125172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.125195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.125212 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.129000 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.149898 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.182773 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.204191 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.216702 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.228292 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.228342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.228360 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.228385 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.228402 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.236556 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.251750 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.265468 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.278986 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.291979 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.312753 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.326594 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.330367 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.330430 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.330447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.330472 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.330489 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.339274 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.358435 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.368914 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf755
55e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.383765 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.401281 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.413785 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.426531 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 
18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.433092 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.433129 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.433141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.433156 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc 
kubenswrapper[4897]: I0214 18:42:57.433165 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.439760 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.451464 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.465322 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.475262 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.487487 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.498083 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.514574 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.535671 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.535702 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.535710 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.535724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.535733 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.637892 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.637955 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.638011 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.638101 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.638121 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.741204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.741261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.741277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.741304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.741329 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.746736 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:12:00.529833539 +0000 UTC Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.815514 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 
18:42:57.845302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.845381 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.845406 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.845442 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.845466 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.847735 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.867726 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.894178 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.911501 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.929159 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.952878 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.952925 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.952940 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.952957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.952969 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:57Z","lastTransitionTime":"2026-02-14T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.956753 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:
42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.974448 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:57 crc kubenswrapper[4897]: I0214 18:42:57.996265 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2f
be0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:57Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.008572 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:58Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.018047 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:58Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.033977 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:58Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.043063 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:58Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.054095 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:58Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.055475 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.055526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 
18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.055540 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.055558 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.055570 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.087733 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.158898 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.158944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.158956 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.158974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.158985 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.262113 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.262182 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.262201 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.262225 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.262243 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.364725 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.364781 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.364802 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.364825 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.364843 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.467765 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.467855 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.467876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.467901 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.467993 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.571812 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.572278 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.572296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.572319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.572331 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.674222 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.674287 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.674305 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.674330 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.674348 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.746924 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:39:34.648773123 +0000 UTC Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.777632 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.777715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.777740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.777768 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.777789 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.793080 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.793141 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.793141 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:42:58 crc kubenswrapper[4897]: E0214 18:42:58.793380 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:42:58 crc kubenswrapper[4897]: E0214 18:42:58.793484 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:42:58 crc kubenswrapper[4897]: E0214 18:42:58.793663 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.880736 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.880803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.880821 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.880848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.880867 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.926715 4897 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.983583 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.983647 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.983668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.983700 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:58 crc kubenswrapper[4897]: I0214 18:42:58.983723 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:58Z","lastTransitionTime":"2026-02-14T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.087296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.087342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.087355 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.087373 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.087384 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.094947 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/0.log" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.099080 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0" exitCode=1 Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.099164 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.100219 4897 scope.go:117] "RemoveContainer" containerID="a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.117445 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.150825 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 
18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.168680 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.191422 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.191528 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.191547 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.191617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.191637 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.193268 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.213075 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.233170 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.253697 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.255985 4897 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 18:42:59 crc 
kubenswrapper[4897]: I0214 18:42:59.267107 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.290842 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.294394 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.294433 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.294480 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.295141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.295220 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.312655 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.335255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.361959 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.387110 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.398164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.398209 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.398221 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.398238 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.398252 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.411015 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:42:59Z is after 2025-08-24T17:21:41Z" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.500611 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.500674 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.500691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.500716 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.500734 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.603946 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.604006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.604023 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.604096 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.604116 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.707642 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.707691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.707705 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.707724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.707738 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.747429 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:38:39.776003157 +0000 UTC Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.809608 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.809638 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.809646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.809659 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.809668 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.912543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.912588 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.912603 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.912620 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:42:59 crc kubenswrapper[4897]: I0214 18:42:59.912982 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:42:59Z","lastTransitionTime":"2026-02-14T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.015124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.015160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.015169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.015184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.015194 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.104583 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/0.log" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.107571 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.107699 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.152894 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.153179 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.153200 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.153229 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.153255 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.161150 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.180020 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.194842 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.212945 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.226494 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.238763 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.255670 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.255696 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.255704 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.255720 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.255730 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.256398 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.276654 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.290651 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.321052 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.334633 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.347149 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.358763 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.358812 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.358821 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.358841 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.358852 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.359975 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\"
,\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.381278 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.462063 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.462157 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.462180 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.462208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.462227 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.542432 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl"] Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.543067 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.545577 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.545698 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.557744 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.564596 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.564664 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.564689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.564721 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.564746 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.569444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ckh\" (UniqueName: \"kubernetes.io/projected/cf15f881-4696-42f3-af8d-2e1b02eee35b-kube-api-access-l6ckh\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.569568 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf15f881-4696-42f3-af8d-2e1b02eee35b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.569638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf15f881-4696-42f3-af8d-2e1b02eee35b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.569715 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cf15f881-4696-42f3-af8d-2e1b02eee35b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.578795 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.599279 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.616725 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.638695 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.656917 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.667513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.667574 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.667593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.667659 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.667680 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.671145 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf15f881-4696-42f3-af8d-2e1b02eee35b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.671243 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf15f881-4696-42f3-af8d-2e1b02eee35b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.671352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6ckh\" (UniqueName: \"kubernetes.io/projected/cf15f881-4696-42f3-af8d-2e1b02eee35b-kube-api-access-l6ckh\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.671440 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf15f881-4696-42f3-af8d-2e1b02eee35b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.672079 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf15f881-4696-42f3-af8d-2e1b02eee35b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.672244 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf15f881-4696-42f3-af8d-2e1b02eee35b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.675255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.678275 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf15f881-4696-42f3-af8d-2e1b02eee35b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.691138 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2f
be0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.704429 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6ckh\" (UniqueName: \"kubernetes.io/projected/cf15f881-4696-42f3-af8d-2e1b02eee35b-kube-api-access-l6ckh\") pod \"ovnkube-control-plane-749d76644c-zhdvl\" (UID: \"cf15f881-4696-42f3-af8d-2e1b02eee35b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.709526 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.731834 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.744324 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.747774 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:03:40.609516132 +0000 UTC Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.758171 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.770577 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.770683 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.770715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.770907 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.771019 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.772350 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.789328 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.792991 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.793051 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.793115 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:00 crc kubenswrapper[4897]: E0214 18:43:00.793214 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:00 crc kubenswrapper[4897]: E0214 18:43:00.793333 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:00 crc kubenswrapper[4897]: E0214 18:43:00.793464 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.802979 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:00Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.865283 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.874802 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.874878 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.874902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.874930 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.874964 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.978755 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.978820 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.978831 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.978855 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:00 crc kubenswrapper[4897]: I0214 18:43:00.978870 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:00Z","lastTransitionTime":"2026-02-14T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.082318 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.082385 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.082408 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.082433 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.082451 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.114061 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" event={"ID":"cf15f881-4696-42f3-af8d-2e1b02eee35b","Type":"ContainerStarted","Data":"2cce032ffc95dd6c27b61ee6b4f4af310532931c29f883277e94009fd7c41683"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.116962 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/1.log" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.117753 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/0.log" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.122127 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168" exitCode=1 Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.122179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.122231 4897 scope.go:117] "RemoveContainer" containerID="a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.123342 4897 scope.go:117] "RemoveContainer" containerID="d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168" Feb 14 18:43:01 crc kubenswrapper[4897]: E0214 18:43:01.123618 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.142255 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.161863 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.180097 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.185046 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.185081 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.185090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.185124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.185134 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.200123 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.220533 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.240100 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.259134 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.272743 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.288645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.288732 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.288749 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.288809 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.288828 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.291542 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.317928 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.335924 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.358324 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.391746 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\"
:\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 
18:43:01.392960 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.392998 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.393008 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.393042 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.393056 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.410061 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.426869 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:01Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.496372 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.496425 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.496435 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.496453 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.496467 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.599611 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.599676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.599689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.599714 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.599726 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.703163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.703224 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.703243 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.703268 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.703283 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.747929 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:38:45.52447384 +0000 UTC Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.806228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.806307 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.806327 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.806353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.806370 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.910199 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.910257 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.910276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.910302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:01 crc kubenswrapper[4897]: I0214 18:43:01.910318 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:01Z","lastTransitionTime":"2026-02-14T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.012944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.013006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.013021 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.013069 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.013084 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.071550 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xrgww"] Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.072243 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.072373 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.087935 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.088001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqqw\" (UniqueName: \"kubernetes.io/projected/6b614985-b2f8-443d-9996-635d7e407b24-kube-api-access-2gqqw\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.093807 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.113193 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.116170 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc 
kubenswrapper[4897]: I0214 18:43:02.116233 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.116251 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.116310 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.116336 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.127437 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.128462 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" event={"ID":"cf15f881-4696-42f3-af8d-2e1b02eee35b","Type":"ContainerStarted","Data":"f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.128541 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" event={"ID":"cf15f881-4696-42f3-af8d-2e1b02eee35b","Type":"ContainerStarted","Data":"0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.131704 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/1.log" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.143398 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc 
kubenswrapper[4897]: I0214 18:43:02.162355 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.181464 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.188694 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.188748 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqqw\" (UniqueName: \"kubernetes.io/projected/6b614985-b2f8-443d-9996-635d7e407b24-kube-api-access-2gqqw\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.188967 4897 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.189166 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:02.689139169 +0000 UTC m=+35.665547692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.201966 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.213178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqqw\" (UniqueName: \"kubernetes.io/projected/6b614985-b2f8-443d-9996-635d7e407b24-kube-api-access-2gqqw\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.218578 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.218615 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.218626 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.218644 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 
18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.218655 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.221868 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.240223 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2f
be0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.253880 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.269175 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.297408 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92f
ce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.311593 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.321602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.321672 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.321691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.321717 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.321735 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.331603 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.350226 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.367983 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.389847 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.409081 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.425124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.425401 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.425591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.425778 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.425967 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.436340 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.457436 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.489790 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92f
ce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.506857 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.526000 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.529437 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.529491 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.529513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.529543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.529565 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.546774 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.567508 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.588259 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.607309 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc 
kubenswrapper[4897]: I0214 18:43:02.630359 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.633096 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.633169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.633209 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.633239 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.633258 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.661448 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.684193 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.694360 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.694568 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.694676 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. 
No retries permitted until 2026-02-14 18:43:03.694643863 +0000 UTC m=+36.671052386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.705907 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.723791 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:02Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.738446 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.738489 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.738503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.738526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.738540 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.748858 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:26:11.625810126 +0000 UTC Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.793887 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.794004 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.793916 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.794122 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.794195 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:02 crc kubenswrapper[4897]: E0214 18:43:02.794408 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.842155 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.842218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.842240 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.842271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.842293 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.945861 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.945948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.945968 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.945997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:02 crc kubenswrapper[4897]: I0214 18:43:02.946015 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:02Z","lastTransitionTime":"2026-02-14T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.049059 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.049168 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.049186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.049211 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.049228 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.152006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.152097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.152115 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.152139 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.152157 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.255402 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.255467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.255485 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.255510 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.255530 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.358957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.359006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.359025 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.359081 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.359098 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.461972 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.462063 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.462084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.462109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.462127 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.564998 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.565093 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.565111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.565134 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.565156 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.604330 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.604549 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:43:19.604508331 +0000 UTC m=+52.580916864 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.604671 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.604747 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.604917 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.604998 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-14 18:43:19.604975925 +0000 UTC m=+52.581384438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.604917 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.605147 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:19.60512261 +0000 UTC m=+52.581531133 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.668065 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.668119 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.668135 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.668160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.668178 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.706020 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.706152 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.706216 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706364 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706410 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706415 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 
14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706435 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706450 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706471 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706522 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:19.706494489 +0000 UTC m=+52.682903012 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706423 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706561 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:19.706543561 +0000 UTC m=+52.682952084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.706634 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:05.706604783 +0000 UTC m=+38.683013306 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.749717 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:13:31.781682 +0000 UTC Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.771191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.771246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.771263 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.771297 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.771315 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.793557 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:03 crc kubenswrapper[4897]: E0214 18:43:03.793741 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.874570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.874646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.874663 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.874689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.874706 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.978193 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.978250 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.978267 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.978295 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:03 crc kubenswrapper[4897]: I0214 18:43:03.978312 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:03Z","lastTransitionTime":"2026-02-14T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.046528 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.046580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.046596 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.046620 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.046637 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.067128 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:04Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.073109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.073152 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.073169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.073193 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.073211 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.093809 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:04Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.099497 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.099557 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.099576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.099605 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.099627 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.121560 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:04Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.138305 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.138382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.138406 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.138435 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.138455 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.159360 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:04Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.165459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.165505 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.165522 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.165541 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.165556 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.184675 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:04Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.184897 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.186718 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.186766 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.186782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.186807 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.186825 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.290416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.290460 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.290476 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.290500 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.290519 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.393331 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.393366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.393382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.393404 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.393421 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.496099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.496509 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.496688 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.496814 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.496947 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.600126 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.600153 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.600163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.600177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.600186 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.702710 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.702763 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.702779 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.702803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.702820 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.750302 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:13:32.143373405 +0000 UTC Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.793910 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.794150 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.794277 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.794400 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.794506 4897 scope.go:117] "RemoveContainer" containerID="e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.793918 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:04 crc kubenswrapper[4897]: E0214 18:43:04.794651 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.804272 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.804296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.804304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.804318 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.804329 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.906740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.906773 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.906782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.906797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:04 crc kubenswrapper[4897]: I0214 18:43:04.906807 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:04Z","lastTransitionTime":"2026-02-14T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.010117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.010426 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.010448 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.010473 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.010490 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.112659 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.112712 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.112724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.112743 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.112756 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.149275 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.151582 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.151894 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.169504 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ba
a38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.189066 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.209279 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.215284 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.215349 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.215373 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc 
kubenswrapper[4897]: I0214 18:43:05.215403 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.215431 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.230321 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.249589 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.269823 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.285424 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.300760 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc 
kubenswrapper[4897]: I0214 18:43:05.318948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.318999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.319020 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.319081 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.319104 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.321871 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.341878 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.363674 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.387842 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.405478 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.422178 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.422218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.422233 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.422258 
4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.422274 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.488645 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.512750 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.525856 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.525894 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.525906 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.525923 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.525936 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.536797 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\"
:\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:05Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 
18:43:05.628924 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.628964 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.628978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.628998 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.629009 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.731339 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.731419 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.731437 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.731460 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.731477 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.750817 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:35:29.768478609 +0000 UTC Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.793540 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:05 crc kubenswrapper[4897]: E0214 18:43:05.793745 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.795174 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:05 crc kubenswrapper[4897]: E0214 18:43:05.795368 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:05 crc kubenswrapper[4897]: E0214 18:43:05.795464 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:09.795436748 +0000 UTC m=+42.771845261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.834316 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.834381 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.834395 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.834412 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.834425 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.936965 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.937022 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.937084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.937117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:05 crc kubenswrapper[4897]: I0214 18:43:05.937140 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:05Z","lastTransitionTime":"2026-02-14T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.040204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.040248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.040261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.040278 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.040291 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.143082 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.143133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.143144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.143160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.143172 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.245379 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.245442 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.245477 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.245492 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.245503 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.348117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.348188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.348205 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.348228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.348242 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.451197 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.451276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.451291 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.451349 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.451368 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.554238 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.554304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.554321 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.554353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.554374 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.657576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.657621 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.657633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.657653 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.657665 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.751350 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:58:58.826131514 +0000 UTC Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.760232 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.760266 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.760275 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.760289 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.760299 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.793524 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.793609 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:06 crc kubenswrapper[4897]: E0214 18:43:06.793700 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.794059 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:06 crc kubenswrapper[4897]: E0214 18:43:06.794131 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:06 crc kubenswrapper[4897]: E0214 18:43:06.794244 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.862634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.862675 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.862688 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.862703 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.862715 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.965829 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.965875 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.965892 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.965949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:06 crc kubenswrapper[4897]: I0214 18:43:06.965965 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:06Z","lastTransitionTime":"2026-02-14T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.069397 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.069450 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.069461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.069476 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.069495 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.171638 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.171695 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.171713 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.171741 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.171756 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.274911 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.274968 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.274977 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.274993 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.275003 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.376827 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.376860 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.376869 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.376883 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.376906 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.478709 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.478758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.478767 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.478782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.478791 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.581513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.581559 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.581571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.581599 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.581611 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.683867 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.683896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.683904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.683917 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.683926 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.752146 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:35:04.032152857 +0000 UTC Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.787315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.787374 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.787393 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.787419 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.787439 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.793790 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:07 crc kubenswrapper[4897]: E0214 18:43:07.794019 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.812235 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc
/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.831312 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06b
d95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.851932 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.888516 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5ec29a354dcda445db81a7bd941099e76b789a3c84d6350edd4020bbb3d96a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:42:58Z\\\",\\\"message\\\":\\\"andler 6 for removal\\\\nI0214 18:42:58.625540 6124 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:42:58.625606 6124 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:42:58.625235 6124 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:42:58.625771 6124 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0214 18:42:58.625904 6124 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0214 18:42:58.626290 6124 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:42:58.626352 6124 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 18:42:58.626363 6124 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 18:42:58.626370 6124 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 18:42:58.626384 6124 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:42:58.626392 6124 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 18:42:58.626408 6124 factory.go:656] Stopping watch factory\\\\nI0214 18:42:58.626427 6124 ovnkube.go:599] Stopped ovnkube\\\\nI0214 18:42:58.626455 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92f
ce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.890868 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.891078 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.891100 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.891124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.891141 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.911900 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.932187 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.949784 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.967302 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.983480 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:07Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.994013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.994087 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.994112 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.994138 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:07 crc kubenswrapper[4897]: I0214 18:43:07.994156 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:07Z","lastTransitionTime":"2026-02-14T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.004562 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:
42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.019850 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.034917 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc 
kubenswrapper[4897]: I0214 18:43:08.055973 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.067696 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.083779 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.097487 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.097553 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.097573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.097599 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.097651 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.103889 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:08Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.200891 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.200961 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.200978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.201002 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.201019 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.304885 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.304932 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.304943 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.304960 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.304969 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.408216 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.408268 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.408287 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.408310 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.408327 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.511414 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.511449 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.511459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.511475 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.511485 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.614209 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.614257 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.614268 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.614285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.614479 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.718196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.718267 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.718292 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.718318 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.718338 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.753001 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:12:52.055262549 +0000 UTC Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.793686 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.793746 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.793785 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:08 crc kubenswrapper[4897]: E0214 18:43:08.793869 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:08 crc kubenswrapper[4897]: E0214 18:43:08.794069 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:08 crc kubenswrapper[4897]: E0214 18:43:08.794146 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.821000 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.821110 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.821129 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.821156 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.821173 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.923985 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.924088 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.924114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.924142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:08 crc kubenswrapper[4897]: I0214 18:43:08.924162 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:08Z","lastTransitionTime":"2026-02-14T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.027341 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.027403 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.027420 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.027445 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.027462 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.130719 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.130763 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.130782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.130804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.130820 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.233747 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.233831 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.233855 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.233892 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.233915 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.337059 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.337133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.337151 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.337178 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.337196 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.439933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.439985 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.440003 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.440026 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.440071 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.543313 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.543359 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.543423 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.543460 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.543476 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.646397 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.646462 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.646481 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.646506 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.646523 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.749790 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.749852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.749871 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.749896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.749918 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.754145 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:41:06.409899939 +0000 UTC Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.793857 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:09 crc kubenswrapper[4897]: E0214 18:43:09.794105 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.837450 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:09 crc kubenswrapper[4897]: E0214 18:43:09.837587 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:09 crc kubenswrapper[4897]: E0214 18:43:09.837653 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:17.837633829 +0000 UTC m=+50.814042312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.852979 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.853061 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.853078 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.853101 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.853118 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.955960 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.956025 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.956078 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.956107 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:09 crc kubenswrapper[4897]: I0214 18:43:09.956128 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:09Z","lastTransitionTime":"2026-02-14T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.059092 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.059136 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.059147 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.059164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.059176 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.162164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.162251 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.162275 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.162310 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.162333 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.265262 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.265307 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.265318 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.265336 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.265348 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.368250 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.368319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.368337 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.368362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.368380 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.470115 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.470180 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.470193 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.470210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.470224 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.575799 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.575865 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.575878 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.575902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.575914 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.678922 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.678988 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.679006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.679059 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.679078 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.755205 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:24:17.787958013 +0000 UTC Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.781725 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.781828 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.781844 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.781870 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.781887 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.793865 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.793952 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:10 crc kubenswrapper[4897]: E0214 18:43:10.794073 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.794134 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:10 crc kubenswrapper[4897]: E0214 18:43:10.794310 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:10 crc kubenswrapper[4897]: E0214 18:43:10.794449 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.884806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.884862 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.884880 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.884927 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.884945 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.987816 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.987883 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.987902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.987929 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:10 crc kubenswrapper[4897]: I0214 18:43:10.987951 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:10Z","lastTransitionTime":"2026-02-14T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.091581 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.091646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.091662 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.091687 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.091704 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.194457 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.194515 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.194529 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.194552 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.194568 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.298062 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.298119 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.298134 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.298157 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.298175 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.400613 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.400666 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.400684 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.400708 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.400726 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.504013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.504100 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.504116 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.504137 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.504152 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.607381 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.607467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.607485 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.607508 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.607528 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.710881 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.710940 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.710957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.710981 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.710998 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.755771 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:02:45.809583884 +0000 UTC Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.793498 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:11 crc kubenswrapper[4897]: E0214 18:43:11.793706 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.814751 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.814820 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.814843 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.814874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.814896 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.918167 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.918218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.918229 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.918248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:11 crc kubenswrapper[4897]: I0214 18:43:11.918259 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:11Z","lastTransitionTime":"2026-02-14T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.021221 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.021284 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.021309 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.021336 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.021358 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.123911 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.123977 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.123993 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.124019 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.124072 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.227198 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.227262 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.227279 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.227304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.227323 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.330801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.330849 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.330870 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.330896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.330913 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.435530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.435591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.435607 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.435632 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.435649 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.538092 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.538760 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.538794 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.538816 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.538828 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.641899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.641974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.641997 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.642059 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.642086 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.745534 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.745612 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.745632 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.745657 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.745675 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.756089 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:07:58.564377741 +0000 UTC Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.793462 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.793551 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:12 crc kubenswrapper[4897]: E0214 18:43:12.793764 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.793521 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:12 crc kubenswrapper[4897]: E0214 18:43:12.793977 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:12 crc kubenswrapper[4897]: E0214 18:43:12.794166 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.848945 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.849016 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.849064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.849090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.849108 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.952611 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.952671 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.952688 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.952712 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:12 crc kubenswrapper[4897]: I0214 18:43:12.952728 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:12Z","lastTransitionTime":"2026-02-14T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.055845 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.055908 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.055929 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.055956 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.055976 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.159679 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.159745 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.159765 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.159798 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.159819 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.263635 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.263696 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.263714 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.263742 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.263761 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.366415 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.366463 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.366478 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.366498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.366513 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.470647 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.470745 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.470770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.470805 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.470849 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.574006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.574089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.574103 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.574126 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.574140 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.677587 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.677642 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.677659 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.677682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.677701 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.756354 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:13:33.489259383 +0000 UTC Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.781003 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.781073 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.781086 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.781105 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.781118 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.793604 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:13 crc kubenswrapper[4897]: E0214 18:43:13.793742 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.795089 4897 scope.go:117] "RemoveContainer" containerID="d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.813611 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
4T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f
3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.845690 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.872946 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.883610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.883644 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.883656 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.883673 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.883686 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.892981 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.928018 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.941169 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.954767 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.978371 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.987766 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.987795 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.987807 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.987826 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.987839 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:13Z","lastTransitionTime":"2026-02-14T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:13 crc kubenswrapper[4897]: I0214 18:43:13.997297 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:13Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.015616 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.031785 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.044492 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.056155 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc 
kubenswrapper[4897]: I0214 18:43:14.078381 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.089743 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.089797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.089811 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.089834 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.089852 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.094431 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.114096 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.185318 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/1.log" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.188489 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.188637 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.192281 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 
18:43:14.192321 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.192333 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.192351 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.192363 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.210903 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.239925 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.261052 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.285553 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.293467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.293512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.293548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.293567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.293578 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.311444 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.314969 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.315004 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.315015 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.315047 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.315058 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.316196 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.327237 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.329382 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.330696 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.330726 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.330752 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.330770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.330779 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.341522 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.344643 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.349572 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.349607 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.349616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.349630 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.349642 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.358573 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.362300 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.366525 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.366548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.366556 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.366571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.366580 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.373776 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.379014 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.379135 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.380706 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.380730 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.380739 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.380753 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.380763 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.386860 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.398609 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.409734 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc 
kubenswrapper[4897]: I0214 18:43:14.424985 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.444254 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.466403 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.484018 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.484130 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.484147 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.484173 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.484220 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.486986 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:
42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:14Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.588493 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.588593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.588615 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.588671 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.588694 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.692177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.692236 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.692254 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.692279 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.692298 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.757193 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:58:58.387107099 +0000 UTC Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.793120 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.793203 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.793341 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.793626 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.793761 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:14 crc kubenswrapper[4897]: E0214 18:43:14.794104 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.795519 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.795566 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.795584 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.795606 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.795622 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.898212 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.898282 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.898308 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.898338 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:14 crc kubenswrapper[4897]: I0214 18:43:14.898361 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:14Z","lastTransitionTime":"2026-02-14T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.002101 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.002172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.002195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.002227 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.002249 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.106228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.106297 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.106314 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.106339 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.106355 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.196778 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/2.log" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.198275 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/1.log" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.205408 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2" exitCode=1 Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.205468 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.205519 4897 scope.go:117] "RemoveContainer" containerID="d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.206747 4897 scope.go:117] "RemoveContainer" containerID="e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2" Feb 14 18:43:15 crc kubenswrapper[4897]: E0214 18:43:15.207142 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.212646 4897 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.213175 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.213204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.213235 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.213258 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.218221 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.240090 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.260741 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.281080 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.297238 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.313566 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc 
kubenswrapper[4897]: I0214 18:43:15.315999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.316115 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.316137 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.316206 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.316223 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.335470 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.354023 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.369866 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.391838 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.409067 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.418571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.418617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.418634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.418656 
4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.418672 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.425934 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.443734 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.474490 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.479704 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.495005 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.514634 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.520970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.521011 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.521023 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.521084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.521100 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.530844 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.544666 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.566281 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0ac6b2f87ac30e1cfca9f511df4154515342eb29bdf98d1769d3a159c98f168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"message\\\":\\\"d as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:00.372941 6318 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0214 18:43:00.373132 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374300 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nI0214 18:43:00.374311 6318 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-k5mzq in node crc\\\\nI0214 18:43:00.374317 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-k5mzq after 0 failed attempt(s)\\\\nI0214 18:43:00.374323 6318 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-k5mzq\\\\nF0214 18:43:00.373245 6318 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.577216 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.590366 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.605720 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.620326 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.623240 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.623295 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.623307 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc 
kubenswrapper[4897]: I0214 18:43:15.623329 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.623342 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.634679 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.645777 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.657691 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc 
kubenswrapper[4897]: I0214 18:43:15.672810 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.686636 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.700918 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.718456 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.725487 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc 
kubenswrapper[4897]: I0214 18:43:15.725526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.725540 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.725560 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.725576 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.731274 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.742487 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.756209 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:15Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.758217 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:19:14.532975279 +0000 UTC Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.793785 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:15 crc kubenswrapper[4897]: E0214 18:43:15.793950 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.828372 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.828449 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.828474 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.828504 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.828528 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.935001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.935138 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.935159 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.935189 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:15 crc kubenswrapper[4897]: I0214 18:43:15.935216 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:15Z","lastTransitionTime":"2026-02-14T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.038752 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.038832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.038868 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.038890 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.038904 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.141610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.141686 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.141708 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.141733 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.141757 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.212331 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/2.log" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.218535 4897 scope.go:117] "RemoveContainer" containerID="e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2" Feb 14 18:43:16 crc kubenswrapper[4897]: E0214 18:43:16.218943 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.240339 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.244465 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.244524 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.244542 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.244568 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.244584 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.261653 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.279024 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.294656 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc 
kubenswrapper[4897]: I0214 18:43:16.316203 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.336339 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.347490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.347537 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.347553 
4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.347576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.347592 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.355940 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.376559 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mount
Path\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.392490 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.411823 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.429858 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.451158 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.451384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.451522 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.451675 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.451816 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.452337 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.470989 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.500696 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.526380 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.545107 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:16Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.556304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.556363 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.556380 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.556410 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.556427 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.659390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.659444 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.659461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.659489 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.659507 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.758578 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:56:33.905213955 +0000 UTC Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.762253 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.762342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.762362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.762392 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.762412 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.793842 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.793898 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.793911 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:16 crc kubenswrapper[4897]: E0214 18:43:16.794017 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:16 crc kubenswrapper[4897]: E0214 18:43:16.794239 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:16 crc kubenswrapper[4897]: E0214 18:43:16.794362 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.866315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.866380 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.866399 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.866426 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.866445 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.970124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.970220 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.970241 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.970273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:16 crc kubenswrapper[4897]: I0214 18:43:16.970297 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:16Z","lastTransitionTime":"2026-02-14T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.073860 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.073932 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.073949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.073976 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.073998 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.177672 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.177744 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.177762 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.177788 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.177809 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.280592 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.280963 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.281045 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.281122 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.281184 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.384725 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.384788 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.384806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.384831 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.384849 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.488573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.489071 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.489146 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.489259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.489343 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.601073 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.601948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.602026 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.602105 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.602128 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.704861 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.704914 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.704932 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.704954 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.704973 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.759578 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:31:37.6691431 +0000 UTC Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.793162 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:17 crc kubenswrapper[4897]: E0214 18:43:17.793341 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.808512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.808948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.809511 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.810013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.810544 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.810590 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.829127 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.854709 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.888810 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.911551 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.914328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.914543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.914700 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.914848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.914970 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:17Z","lastTransitionTime":"2026-02-14T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.931907 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.938325 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:17 crc kubenswrapper[4897]: E0214 18:43:17.938567 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:17 crc kubenswrapper[4897]: E0214 18:43:17.938706 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:33.938672128 +0000 UTC m=+66.915080791 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.942563 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.951941 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3
d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.959558 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.969416 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:17 crc kubenswrapper[4897]: I0214 18:43:17.984552 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.001612 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:17Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.014199 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.028144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.028196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.028219 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.028251 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.028272 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.028832 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc 
kubenswrapper[4897]: I0214 18:43:18.045669 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.063982 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.078552 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.101441 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.115824 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.128719 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.131107 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.131152 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.131165 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc 
kubenswrapper[4897]: I0214 18:43:18.131186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.131201 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.143492 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.161118 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T1
8:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.174323 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.188552 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc 
kubenswrapper[4897]: I0214 18:43:18.212581 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.235357 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.235432 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.235451 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.235475 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.235491 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.236820 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.250117 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.262313 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.271902 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.285192 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.297616 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.308179 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.323851 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.335245 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.338169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.338202 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.338210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.338225 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.338236 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.350978 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:18Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.442168 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc 
kubenswrapper[4897]: I0214 18:43:18.442317 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.442340 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.442368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.442389 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.546100 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.546142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.546157 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.546179 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.546196 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.648825 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.648854 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.648862 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.648876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.648924 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.751388 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.751447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.751464 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.751490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.751507 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.760749 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:08:44.128266836 +0000 UTC Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.793231 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.793315 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.793258 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:18 crc kubenswrapper[4897]: E0214 18:43:18.793428 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:18 crc kubenswrapper[4897]: E0214 18:43:18.793609 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:18 crc kubenswrapper[4897]: E0214 18:43:18.793789 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.854593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.854655 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.854676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.854699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.854716 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.958076 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.958110 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.958120 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.958137 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:18 crc kubenswrapper[4897]: I0214 18:43:18.958148 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:18Z","lastTransitionTime":"2026-02-14T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.060018 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.060130 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.060188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.060217 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.060239 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.163172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.163247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.163271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.163302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.163328 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.266682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.267161 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.267311 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.267761 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.268081 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.370785 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.370913 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.370935 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.370957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.370973 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.474315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.474348 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.474362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.474378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.474390 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.577610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.578080 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.578283 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.578444 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.578578 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.657905 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.658155 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.658213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.658374 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.658444 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:51.658424236 +0000 UTC m=+84.634832759 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.658534 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:43:51.658520999 +0000 UTC m=+84.634929522 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.658783 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.658991 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:51.658969854 +0000 UTC m=+84.635378367 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.682471 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.682529 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.682549 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.682586 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.682609 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.759447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.759791 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.759823 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.760131 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.760165 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.759961 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.760248 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:51.760224699 +0000 UTC m=+84.736633212 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.760282 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.760317 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.760435 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:43:51.760389105 +0000 UTC m=+84.736797778 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.761734 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:46:53.20908345 +0000 UTC
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.786605 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.786735 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.786818 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.786912 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.786949 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.793643 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:43:19 crc kubenswrapper[4897]: E0214 18:43:19.794000 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.890472 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.890523 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.890535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.890554 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:19 crc kubenswrapper[4897]: I0214 18:43:19.890576 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:19Z","lastTransitionTime":"2026-02-14T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.001849 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.001933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.001952 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.001983 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.002005 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.104828 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.105169 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.105237 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.105326 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.105398 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.208642 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.209111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.209341 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.209487 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.209624 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.313548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.313623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.313648 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.313679 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.313704 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.417129 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.417197 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.417215 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.417242 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.417260 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.519556 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.519619 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.519641 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.519668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.519688 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.623130 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.623201 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.623224 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.623253 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.623278 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.726224 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.726291 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.726315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.726346 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.726370 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.762805 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:54:14.241042422 +0000 UTC
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.793514 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:43:20 crc kubenswrapper[4897]: E0214 18:43:20.793993 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.793592 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:43:20 crc kubenswrapper[4897]: E0214 18:43:20.794460 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.793531 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:43:20 crc kubenswrapper[4897]: E0214 18:43:20.794842 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.829804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.829864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.829876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.829903 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.829920 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.932740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.933089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.933235 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.933412 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:20 crc kubenswrapper[4897]: I0214 18:43:20.933555 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:20Z","lastTransitionTime":"2026-02-14T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.036184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.036219 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.036244 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.036259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.036268 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.138350 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.138573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.138633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.138697 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.138762 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.240728 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.240804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.240824 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.240848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.240866 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.343875 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.343954 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.343974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.344000 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.344017 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.447266 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.447322 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.447338 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.447368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.447386 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.550010 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.550148 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.550166 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.550195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.550214 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.654320 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.654380 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.654396 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.654420 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.654438 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.757448 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.757516 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.757535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.757562 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.757582 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.763807 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:52:14.373602634 +0000 UTC
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.793575 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:43:21 crc kubenswrapper[4897]: E0214 18:43:21.793776 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.860391 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.860456 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.860475 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.860501 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.860519 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.963721 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.963791 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.963815 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.963841 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:21 crc kubenswrapper[4897]: I0214 18:43:21.963856 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:21Z","lastTransitionTime":"2026-02-14T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.067462 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.067531 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.067551 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.067579 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.067600 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.171267 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.171388 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.171457 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.171492 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.171520 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.274073 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.274179 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.274232 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.274257 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.274280 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.377801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.377886 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.377951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.377982 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.378008 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.481772 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.481846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.481864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.481890 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.481940 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.585901 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.586022 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.586120 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.586153 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.586171 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.689653 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.689715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.689733 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.689758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.689777 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.764889 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:39:40.219346813 +0000 UTC Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.792913 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.792971 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.793063 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.793127 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: E0214 18:43:22.793166 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.793196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.793213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.793239 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.793277 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: E0214 18:43:22.793301 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:22 crc kubenswrapper[4897]: E0214 18:43:22.793448 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.895984 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.896107 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.896138 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.896177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.896199 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.999297 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.999388 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.999407 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.999438 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:22 crc kubenswrapper[4897]: I0214 18:43:22.999459 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:22Z","lastTransitionTime":"2026-02-14T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.103573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.103642 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.103664 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.103691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.103710 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.207527 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.207630 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.207654 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.207691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.207720 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.311193 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.311258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.311274 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.311300 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.311318 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.414806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.414883 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.414909 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.414954 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.414979 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.517952 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.518074 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.518094 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.518119 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.518137 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.622390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.622452 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.622471 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.622498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.622520 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.725291 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.725349 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.725368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.725397 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.725414 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.765328 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:21:58.051549153 +0000 UTC Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.793786 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:23 crc kubenswrapper[4897]: E0214 18:43:23.793973 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.827476 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.827545 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.827571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.827599 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.827617 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.929931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.929984 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.930001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.930095 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:23 crc kubenswrapper[4897]: I0214 18:43:23.930121 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:23Z","lastTransitionTime":"2026-02-14T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.034447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.034508 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.034526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.034552 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.034574 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.136922 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.136978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.137000 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.137024 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.137068 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.239809 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.239881 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.239904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.239935 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.239956 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.343645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.343706 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.343725 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.343753 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.343777 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.447002 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.447141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.447168 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.447200 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.447225 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.550321 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.550387 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.550409 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.550439 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.550462 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.654249 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.654319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.654338 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.654363 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.654380 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.688340 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.688434 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.688468 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.688501 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.688524 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.717141 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:24Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.722790 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.722844 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.722861 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.722888 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.722905 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.743650 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:24Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.749867 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.749913 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.749980 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.750006 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.750024 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.766291 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:05:20.420139545 +0000 UTC Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.771217 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",
\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:24Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.790541 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.790586 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.790606 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.790649 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.790677 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.793419 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.793589 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.794265 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.794307 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.794406 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.794627 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.830441 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:24Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.837376 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.837419 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.837431 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.837452 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.837464 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.855903 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:24Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:24 crc kubenswrapper[4897]: E0214 18:43:24.856166 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.858164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.858214 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.858230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.858254 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.858269 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.961127 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.961274 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.961306 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.961337 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:24 crc kubenswrapper[4897]: I0214 18:43:24.961359 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:24Z","lastTransitionTime":"2026-02-14T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.064804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.064863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.064880 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.064904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.064921 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.167661 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.167933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.168099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.168276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.168411 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.270529 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.270580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.270596 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.270619 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.270638 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.373564 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.373619 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.373640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.373664 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.373681 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.476206 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.476263 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.476285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.476308 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.476325 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.579464 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.579513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.579528 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.579547 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.579561 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.682695 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.682740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.682757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.682781 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.682798 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.766695 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:43:42.055521723 +0000 UTC Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.785474 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.785512 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.785523 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.785538 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.785550 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.793233 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:25 crc kubenswrapper[4897]: E0214 18:43:25.793413 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.888702 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.888777 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.888795 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.888820 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.888837 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.991657 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.991723 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.991741 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.991768 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:25 crc kubenswrapper[4897]: I0214 18:43:25.991786 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:25Z","lastTransitionTime":"2026-02-14T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.094895 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.094959 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.094981 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.095009 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.095064 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.197501 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.197575 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.197585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.197603 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.197617 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.303771 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.303837 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.303876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.303909 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.303937 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.407613 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.407695 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.407713 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.407738 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.407756 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.511134 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.511226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.511241 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.511264 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.511281 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.613776 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.614259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.614494 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.614710 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.614908 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.718412 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.718537 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.718562 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.718595 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.718620 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.767569 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:38:38.5507787 +0000 UTC Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.793467 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:26 crc kubenswrapper[4897]: E0214 18:43:26.793643 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.793923 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.794086 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:26 crc kubenswrapper[4897]: E0214 18:43:26.794361 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:26 crc kubenswrapper[4897]: E0214 18:43:26.794148 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.821645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.821689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.821701 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.821718 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.821730 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.924580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.924662 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.924687 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.924717 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:26 crc kubenswrapper[4897]: I0214 18:43:26.924743 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:26Z","lastTransitionTime":"2026-02-14T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.027799 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.027870 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.027896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.027925 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.027949 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.130368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.130405 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.130415 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.130451 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.130465 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.233687 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.233784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.233801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.233822 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.233837 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.336314 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.336412 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.336437 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.336472 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.336499 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.439597 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.439658 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.439676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.439699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.439718 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.543650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.543748 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.543771 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.543796 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.543867 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.648014 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.648112 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.648131 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.648163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.648182 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.751518 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.751569 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.751582 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.751601 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.751616 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.768176 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:41:54.72185163 +0000 UTC Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.793192 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:27 crc kubenswrapper[4897]: E0214 18:43:27.793405 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.813336 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.832164 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.850140 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.856299 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.856363 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.856381 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.856413 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.856431 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.868392 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.891891 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.916556 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.938939 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.952630 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.959726 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.959788 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.959809 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.959835 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.959854 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:27Z","lastTransitionTime":"2026-02-14T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:27 crc kubenswrapper[4897]: I0214 18:43:27.970148 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:27 crc 
kubenswrapper[4897]: I0214 18:43:27.996939 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:27Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.027803 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.047187 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.062049 4897 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.062100 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.062112 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.062133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.062149 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.067891 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.097149 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.111169 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.126821 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.146135 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e
42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:28Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.164600 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.164627 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.164636 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.164650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.164659 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.267270 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.267334 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.267356 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.267381 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.267415 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.370760 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.370846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.370869 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.370901 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.370927 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.473226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.473277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.473289 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.473308 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.473323 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.575990 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.576437 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.576458 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.576483 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.576504 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.678538 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.678570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.678579 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.678593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.678607 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.768573 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:06:27.949223626 +0000 UTC Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.780733 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.780773 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.780782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.780796 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.780805 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.793207 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.793283 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:28 crc kubenswrapper[4897]: E0214 18:43:28.793311 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.793211 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:28 crc kubenswrapper[4897]: E0214 18:43:28.793455 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:28 crc kubenswrapper[4897]: E0214 18:43:28.793466 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.883842 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.883913 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.883930 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.883953 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.883970 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.990475 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.990509 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.990524 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.990539 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:28 crc kubenswrapper[4897]: I0214 18:43:28.990584 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:28Z","lastTransitionTime":"2026-02-14T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.093003 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.093090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.093109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.093134 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.093152 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.195274 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.195319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.195330 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.195348 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.195359 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.298164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.298226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.298244 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.298269 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.298286 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.400922 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.401002 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.401022 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.401089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.401112 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.503938 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.503989 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.504002 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.504019 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.504047 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.606155 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.606190 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.606201 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.606218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.606229 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.708479 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.708536 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.708553 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.708575 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.708592 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.769377 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:54:15.027669087 +0000 UTC Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.793079 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:29 crc kubenswrapper[4897]: E0214 18:43:29.793254 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.811278 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.811355 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.811382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.811413 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.811438 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.913604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.913698 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.913717 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.913741 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:29 crc kubenswrapper[4897]: I0214 18:43:29.913758 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:29Z","lastTransitionTime":"2026-02-14T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.016818 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.016877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.016895 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.016919 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.016937 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.119630 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.119686 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.119696 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.119717 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.119734 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.224099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.224185 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.224214 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.224249 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.224274 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.326562 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.326602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.326612 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.326629 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.326641 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.428800 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.428874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.428892 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.428921 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.428938 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.531445 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.531490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.531501 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.531518 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.531530 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.634819 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.634970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.634989 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.635015 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.635069 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.738203 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.738288 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.738306 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.738328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.738344 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.769796 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:10:55.944190588 +0000 UTC Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.793644 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.793691 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.793659 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:30 crc kubenswrapper[4897]: E0214 18:43:30.793856 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:30 crc kubenswrapper[4897]: E0214 18:43:30.793966 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:30 crc kubenswrapper[4897]: E0214 18:43:30.794737 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.795295 4897 scope.go:117] "RemoveContainer" containerID="e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2" Feb 14 18:43:30 crc kubenswrapper[4897]: E0214 18:43:30.795690 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.841992 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.842084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.842102 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.842751 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.842815 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.946470 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.946515 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.946526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.946544 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:30 crc kubenswrapper[4897]: I0214 18:43:30.946583 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:30Z","lastTransitionTime":"2026-02-14T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.049685 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.049746 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.049763 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.049789 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.049807 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.152748 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.152856 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.152874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.152898 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.152915 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.255487 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.255537 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.255549 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.255570 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.255583 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.359088 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.359127 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.359142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.359158 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.359169 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.462519 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.462593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.462624 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.462639 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.462671 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.565833 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.565877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.565885 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.565899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.565908 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.668649 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.668689 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.668696 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.668727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.668743 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.769912 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:21:15.582929993 +0000 UTC Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.771218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.771259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.771271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.771289 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.771301 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.793566 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:31 crc kubenswrapper[4897]: E0214 18:43:31.793717 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.873787 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.873834 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.873848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.873865 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.873880 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.976609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.976643 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.976651 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.976665 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:31 crc kubenswrapper[4897]: I0214 18:43:31.976675 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:31Z","lastTransitionTime":"2026-02-14T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.079404 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.079432 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.079440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.079453 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.079461 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.181881 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.181932 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.181944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.181965 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.181978 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.284701 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.284737 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.284749 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.284764 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.284774 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.387608 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.387649 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.387660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.387675 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.387686 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.489794 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.489874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.489886 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.489925 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.489938 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.593352 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.593406 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.593418 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.593434 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.593446 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.696141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.696168 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.696176 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.696188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.696198 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.770843 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:28:27.843106019 +0000 UTC Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.793133 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:32 crc kubenswrapper[4897]: E0214 18:43:32.793305 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.793659 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:32 crc kubenswrapper[4897]: E0214 18:43:32.793793 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.794088 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:32 crc kubenswrapper[4897]: E0214 18:43:32.794203 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.798727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.798767 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.798775 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.798803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.798812 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.901338 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.901384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.901395 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.901417 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:32 crc kubenswrapper[4897]: I0214 18:43:32.901433 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:32Z","lastTransitionTime":"2026-02-14T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.005305 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.005378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.005390 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.005427 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.005441 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.108498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.108560 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.108571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.108590 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.108602 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.211506 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.211571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.211585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.211604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.211616 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.315247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.315304 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.315319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.315343 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.315360 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.418428 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.418478 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.418489 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.418513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.418525 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.521859 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.521900 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.521910 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.521926 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.521936 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.625172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.625241 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.625259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.625287 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.625303 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.727198 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.727234 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.727246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.727263 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.727276 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.771807 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:05:05.795543891 +0000 UTC Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.793244 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:33 crc kubenswrapper[4897]: E0214 18:43:33.793422 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.829301 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.829353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.829372 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.829394 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.829411 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.932060 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.932114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.932128 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.932146 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:33 crc kubenswrapper[4897]: I0214 18:43:33.932157 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:33Z","lastTransitionTime":"2026-02-14T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.029231 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.029428 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.029534 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:06.0295037 +0000 UTC m=+99.005912223 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.035693 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.035750 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.035761 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.035779 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.035816 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.138222 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.138265 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.138276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.138298 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.138311 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.241116 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.241177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.241196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.241227 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.241247 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.344162 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.344216 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.344233 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.344259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.344282 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.446538 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.446602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.446620 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.446771 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.446794 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.549560 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.549622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.549631 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.549650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.549661 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.652385 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.652441 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.652450 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.652470 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.652484 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.755171 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.755210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.755221 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.755238 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.755249 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.772505 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:22:24.202967493 +0000 UTC Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.793499 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.793538 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.793654 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.793772 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.793784 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.794063 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.858609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.858680 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.858705 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.858736 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.858758 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.948515 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.948569 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.948582 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.948605 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.948619 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.966827 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:34Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.971706 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.971745 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.971759 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.971779 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.971794 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:34 crc kubenswrapper[4897]: E0214 18:43:34.989592 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:34Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.993829 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.993863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.993873 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.993904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:34 crc kubenswrapper[4897]: I0214 18:43:34.993916 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:34Z","lastTransitionTime":"2026-02-14T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: E0214 18:43:35.013627 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:35Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.017470 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.017527 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.017549 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.017567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.017580 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: E0214 18:43:35.029520 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:35Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.033806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.033851 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.033864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.033882 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.033897 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: E0214 18:43:35.052245 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:35Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:35 crc kubenswrapper[4897]: E0214 18:43:35.052384 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.053784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.053837 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.053847 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.053863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.053872 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.156585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.156640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.156654 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.156671 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.156683 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.264358 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.264429 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.264451 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.264481 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.264502 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.366471 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.366535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.366553 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.366580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.366598 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.469919 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.470005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.470059 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.470091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.470115 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.573527 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.573609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.573635 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.573666 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.573688 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.676713 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.676770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.676782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.676805 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.676821 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.773129 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:05:09.703789933 +0000 UTC Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.779927 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.780001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.780013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.780052 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.780066 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.793446 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:35 crc kubenswrapper[4897]: E0214 18:43:35.793780 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.887900 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.887982 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.887999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.888020 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.888052 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.990647 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.990715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.990736 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.990762 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:35 crc kubenswrapper[4897]: I0214 18:43:35.990779 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:35Z","lastTransitionTime":"2026-02-14T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.093383 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.093434 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.093447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.093466 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.093479 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.195436 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.195469 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.195478 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.195491 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.195499 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.295096 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/0.log" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.295164 4897 generic.go:334] "Generic (PLEG): container finished" podID="b5b30895-0d98-44e4-8e31-2c5ebe5e1850" containerID="491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd" exitCode=1 Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.295222 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerDied","Data":"491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.295874 4897 scope.go:117] "RemoveContainer" containerID="491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.296772 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.296809 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.296822 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.296842 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.296854 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.309418 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.323203 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.336106 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.348808 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.368575 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.385025 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.395727 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.399328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.399375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.399394 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.399420 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.399438 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.406894 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc 
kubenswrapper[4897]: I0214 18:43:36.421363 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.439250 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.453875 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.468269 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.479551 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.490321 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06b
d95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.502684 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.502835 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.503112 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.503174 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.503246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.503306 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.518606 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.549466 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:36Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.605968 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.606317 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.606426 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.606513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.606631 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.714556 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.714603 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.714616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.714634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.714643 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.775159 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:10:41.218896473 +0000 UTC Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.793632 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.793661 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:36 crc kubenswrapper[4897]: E0214 18:43:36.793864 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:36 crc kubenswrapper[4897]: E0214 18:43:36.793919 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.793664 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:36 crc kubenswrapper[4897]: E0214 18:43:36.793987 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.817594 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.817637 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.817647 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.817666 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.817676 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.920232 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.920276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.920285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.920302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:36 crc kubenswrapper[4897]: I0214 18:43:36.920313 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:36Z","lastTransitionTime":"2026-02-14T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.023649 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.023720 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.023742 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.023771 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.023791 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.127830 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.127888 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.127899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.127917 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.127930 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.231743 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.231784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.231796 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.231812 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.231824 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.302024 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/0.log" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.302557 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerStarted","Data":"59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.316520 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.330474 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc 
kubenswrapper[4897]: I0214 18:43:37.334306 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.334353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.334366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.334389 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.334402 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.345926 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.359495 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.374676 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.387890 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.412075 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.428258 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.437016 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.437089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.437104 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.437125 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.437139 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.448013 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.462247 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.480531 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.501656 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.516097 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.527394 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.539970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.540039 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.540052 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.540076 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.540088 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.541935 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.556618 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.571931 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.643743 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.643852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.644014 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc 
kubenswrapper[4897]: I0214 18:43:37.644175 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.644221 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.747210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.747276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.747295 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.747382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.747403 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.777102 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:39:20.698807493 +0000 UTC Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.793526 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:37 crc kubenswrapper[4897]: E0214 18:43:37.793798 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.810222 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.828866 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.844819 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.849866 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.849925 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.849935 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc 
kubenswrapper[4897]: I0214 18:43:37.849956 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.849968 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.861817 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.881675 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.898964 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.913463 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8
dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.928068 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc 
kubenswrapper[4897]: I0214 18:43:37.951755 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.954536 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.954594 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.954609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.954633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.954646 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:37Z","lastTransitionTime":"2026-02-14T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.972837 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:37 crc kubenswrapper[4897]: I0214 18:43:37.991849 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:37Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.012775 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:38Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.023931 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:38Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.035413 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06b
d95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:38Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.052631 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:38Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.057870 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.057929 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.057942 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.057963 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.057977 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.068250 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:38Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.087919 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:38Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.161467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.161537 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.161602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.161633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.161684 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.263874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.263931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.263944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.263964 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.263978 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.366806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.366871 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.366885 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.366910 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.366924 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.470090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.470566 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.470715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.470829 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.470934 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.574399 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.574438 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.574447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.574464 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.574475 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.677944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.678230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.678314 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.678398 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.678485 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.778342 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:08:42.340977346 +0000 UTC Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.781670 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.781788 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.781904 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.782005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.782113 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.793152 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.793346 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.793212 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:38 crc kubenswrapper[4897]: E0214 18:43:38.793527 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:38 crc kubenswrapper[4897]: E0214 18:43:38.793542 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:38 crc kubenswrapper[4897]: E0214 18:43:38.793694 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.885696 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.885758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.885776 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.885801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.885817 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.989759 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.990342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.990458 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.990792 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:38 crc kubenswrapper[4897]: I0214 18:43:38.990902 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:38Z","lastTransitionTime":"2026-02-14T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.095164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.095247 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.095261 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.095282 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.095295 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.198447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.198513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.198534 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.198560 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.198579 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.301921 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.301972 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.301983 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.302001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.302013 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.404652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.404698 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.404708 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.404723 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.404731 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.507979 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.508400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.508633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.508837 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.508970 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.611880 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.611909 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.611916 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.611930 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.611939 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.715117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.715157 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.715167 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.715183 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.715193 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.779642 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:31:22.441524904 +0000 UTC Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.793249 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:39 crc kubenswrapper[4897]: E0214 18:43:39.793390 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.817720 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.817797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.817816 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.817845 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.817864 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.920159 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.920184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.920191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.920204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:39 crc kubenswrapper[4897]: I0214 18:43:39.920213 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:39Z","lastTransitionTime":"2026-02-14T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.021968 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.022079 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.022099 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.022127 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.022149 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.125400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.125467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.125485 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.125517 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.125537 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.228164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.228219 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.228231 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.228250 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.228262 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.330117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.330148 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.330160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.330174 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.330184 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.433194 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.433251 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.433268 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.433294 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.433311 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.535413 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.535447 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.535455 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.535470 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.535481 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.637379 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.637426 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.637438 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.637461 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.637478 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.740358 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.740414 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.740436 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.740465 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.740491 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.780738 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:08:06.238923783 +0000 UTC Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.793079 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.793149 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:40 crc kubenswrapper[4897]: E0214 18:43:40.793210 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.793236 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:40 crc kubenswrapper[4897]: E0214 18:43:40.793418 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:40 crc kubenswrapper[4897]: E0214 18:43:40.793529 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.844136 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.844194 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.844217 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.844248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.844272 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.950089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.950149 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.950162 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.950184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:40 crc kubenswrapper[4897]: I0214 18:43:40.950202 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:40Z","lastTransitionTime":"2026-02-14T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.053395 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.053460 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.053477 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.053506 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.053524 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.156718 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.156780 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.156799 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.156826 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.156844 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.260070 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.260138 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.260149 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.260171 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.260186 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.363506 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.363593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.363612 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.363699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.363724 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.467187 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.467258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.467277 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.467307 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.467331 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.571160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.571241 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.571264 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.571294 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.571316 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.674150 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.674196 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.674210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.674229 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.674243 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.777322 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.777376 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.777391 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.777417 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.777432 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.781406 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:37:52.325667864 +0000 UTC Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.793877 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:41 crc kubenswrapper[4897]: E0214 18:43:41.794017 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.880923 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.881007 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.881057 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.881091 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.881112 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.984090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.984176 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.984197 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.984226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:41 crc kubenswrapper[4897]: I0214 18:43:41.984248 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:41Z","lastTransitionTime":"2026-02-14T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.086564 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.086617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.086634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.086660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.086681 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.189393 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.189459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.189476 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.189503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.189519 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.292993 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.293097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.293117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.293144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.293162 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.395882 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.395965 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.396001 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.396094 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.396120 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.499075 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.499142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.499160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.499188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.499209 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.602783 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.602855 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.602876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.602905 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.602929 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.706557 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.706630 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.706652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.706682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.706707 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.782464 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:57:51.254352273 +0000 UTC Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.793291 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.793445 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.793287 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:42 crc kubenswrapper[4897]: E0214 18:43:42.793512 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:42 crc kubenswrapper[4897]: E0214 18:43:42.793688 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:42 crc kubenswrapper[4897]: E0214 18:43:42.793828 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.809604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.809650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.809668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.809863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.809881 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.912614 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.912682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.912706 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.912734 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:42 crc kubenswrapper[4897]: I0214 18:43:42.912754 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:42Z","lastTransitionTime":"2026-02-14T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.015816 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.015876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.015896 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.015920 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.015938 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.118622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.118685 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.118703 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.118731 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.118755 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.221833 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.221890 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.221910 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.221934 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.221951 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.324276 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.324336 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.324354 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.324378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.324395 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.427020 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.427101 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.427118 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.427142 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.427159 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.530589 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.530655 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.530674 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.530699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.530725 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.633961 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.634131 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.634161 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.634185 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.634202 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.737117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.737172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.737192 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.737216 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.737233 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.782887 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:38:33.949049513 +0000 UTC Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.793315 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:43 crc kubenswrapper[4897]: E0214 18:43:43.793929 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.794422 4897 scope.go:117] "RemoveContainer" containerID="e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.839799 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.839847 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.839864 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.839887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.839906 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.942839 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.942903 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.942922 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.942948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:43 crc kubenswrapper[4897]: I0214 18:43:43.942965 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:43Z","lastTransitionTime":"2026-02-14T18:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.046926 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.046974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.046992 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.047020 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.047064 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.150565 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.150622 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.150639 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.150663 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.150681 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.254117 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.254184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.254202 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.254227 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.254245 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.330781 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/2.log" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.333908 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.335089 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.354799 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b3
1ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.356677 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.356739 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.356751 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.356766 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.356778 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.375475 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520
646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.393053 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.412977 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.427199 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.444525 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.459170 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.459221 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.459232 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.459252 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.459266 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.466499 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.482461 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.499849 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.525872 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.540882 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8
dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.552256 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc 
kubenswrapper[4897]: I0214 18:43:44.562529 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.562576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.562585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.562602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.562616 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.575699 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.591735 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.610939 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.623189 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.645775 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:44Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.665045 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.665084 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.665094 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.665111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.665120 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.767329 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.767378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.767395 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.767419 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.767436 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.783805 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:50:51.890584507 +0000 UTC Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.793300 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.793427 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:44 crc kubenswrapper[4897]: E0214 18:43:44.793526 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:44 crc kubenswrapper[4897]: E0214 18:43:44.793621 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.793773 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:44 crc kubenswrapper[4897]: E0214 18:43:44.794091 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.870315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.870378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.870396 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.870423 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.870441 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.974595 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.974664 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.974682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.974709 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:44 crc kubenswrapper[4897]: I0214 18:43:44.974728 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:44Z","lastTransitionTime":"2026-02-14T18:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.061189 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.061263 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.061280 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.061310 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.061328 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.082090 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.087329 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.087399 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.087416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.087442 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.087460 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.108986 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.114455 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.114498 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.114517 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.114541 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.114558 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.134009 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.139881 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.139932 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.139949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.139973 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.139990 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.161142 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.166092 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.166366 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.166568 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.166779 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.166991 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.187246 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.187890 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.190175 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.190228 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.190253 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.190284 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.190307 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.293201 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.293270 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.293293 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.293325 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.293352 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.340980 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/3.log" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.342482 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/2.log" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.346918 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63" exitCode=1 Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.346975 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.347023 4897 scope.go:117] "RemoveContainer" containerID="e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.348000 4897 scope.go:117] "RemoveContainer" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63" Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.348329 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.371594 4897 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.394181 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.396585 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.396640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.396658 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.396724 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.396744 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.413376 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.431616 4897 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.448662 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.467637 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.480300 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8
dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.495425 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc 
kubenswrapper[4897]: I0214 18:43:45.499755 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.499825 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.499859 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.499882 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.499897 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.517001 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.535511 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.548802 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.565163 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.581639 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.593139 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06b
d95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.602787 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.602938 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.603022 4897 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.603115 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.603177 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.608181 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.622815 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.642837 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0ea677be9340d9113464701f54a49f1396f521e585b2130385dbd758edf5ee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:14Z\\\",\\\"message\\\":\\\"twork-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 
18:43:14.734199 6534 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 18:43:14.734424 6534 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 18:43:14.734771 6534 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 18:43:14.734801 6534 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 18:43:14.734821 6534 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 18:43:14.734869 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 18:43:14.734921 6534 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 18:43:14.734952 6534 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 18:43:14.734958 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0214 18:43:14.734982 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 18:43:14.735010 6534 factory.go:656] Stopping watch factory\\\\nI0214 18:43:14.735069 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 18:43:14.735076 6534 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:44Z\\\",\\\"message\\\":\\\"imeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:44.816262 6935 lb_config.go:1031] Cluster endpoints for 
openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nI0214 18:43:44.816269 6935 services_controller.go:451] Built service openshift-kube-apiserver/apiserver cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 18:43:44.816299 6935 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, 
V4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:45Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.706669 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.706740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.706765 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.706797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.706823 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.784131 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:25:19.487333715 +0000 UTC Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.793198 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:45 crc kubenswrapper[4897]: E0214 18:43:45.793471 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.809359 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.809420 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.809440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.809467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.809487 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.912693 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.912757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.912777 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.912804 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:45 crc kubenswrapper[4897]: I0214 18:43:45.912822 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:45Z","lastTransitionTime":"2026-02-14T18:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.015347 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.015391 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.015402 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.015420 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.015432 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.118295 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.118362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.118380 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.118403 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.118422 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.220999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.221079 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.221096 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.221119 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.221135 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.323912 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.323955 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.323971 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.323993 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.324009 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.354215 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/3.log" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.360122 4897 scope.go:117] "RemoveContainer" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63" Feb 14 18:43:46 crc kubenswrapper[4897]: E0214 18:43:46.360446 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.381861 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233b
c730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.400257 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.421509 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.426400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.426486 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.426505 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.426527 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.426543 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.444216 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a
7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.462223 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.480734 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc 
kubenswrapper[4897]: I0214 18:43:46.501653 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.521132 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.530511 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.530571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.530591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.530617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.530636 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.547642 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.567514 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.589510 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.621945 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:44Z\\\",\\\"message\\\":\\\"imeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:44.816262 6935 lb_config.go:1031] Cluster endpoints for openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nI0214 18:43:44.816269 6935 services_controller.go:451] Built service openshift-kube-apiserver/apiserver cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 18:43:44.816299 6935 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.635773 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.635826 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.635837 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.635856 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.635869 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.640661 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.662806 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06b
d95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.683970 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.701212 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.714745 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:46Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.739161 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.739235 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.739259 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc 
kubenswrapper[4897]: I0214 18:43:46.739285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.739304 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.784946 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:30:47.430795811 +0000 UTC Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.793307 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.793350 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.793382 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:46 crc kubenswrapper[4897]: E0214 18:43:46.793462 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:46 crc kubenswrapper[4897]: E0214 18:43:46.793631 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:46 crc kubenswrapper[4897]: E0214 18:43:46.793676 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.842128 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.842197 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.842218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.842242 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.842260 4897 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.945695 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.945757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.945777 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.945801 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:46 crc kubenswrapper[4897]: I0214 18:43:46.945821 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:46Z","lastTransitionTime":"2026-02-14T18:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.048807 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.048885 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.048910 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.048940 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.048962 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.151875 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.151929 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.151944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.151964 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.151977 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.254849 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.254931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.254969 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.255002 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.255058 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.359198 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.359269 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.359317 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.359350 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.359372 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.462841 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.462906 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.462923 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.462948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.462966 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.566782 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.566853 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.566876 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.566905 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.566928 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.669441 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.669514 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.669535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.669564 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.669585 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.772646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.772713 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.772737 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.772767 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.772792 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.785449 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:00:19.324003621 +0000 UTC Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.793877 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:47 crc kubenswrapper[4897]: E0214 18:43:47.794236 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.833394 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf15f881-4696-42f3-af8d-2e1b02eee35b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b77b31ca9268c27935ddb1973488480bb124ffdb5934bb011eca818de003da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d4195bd0709714936c540f3c16554a5d06bd95ed92c23003b5d7efc05afcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ckh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhdvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.864335 4897 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e1f20e2-fb27-410d-8019-82e73c0be2e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81f5449b8c083d713a05ff1299a5a4025873014bb736633762af4acc1d6d7214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73eab27b7a388abb7c9142d8ef6520646cf9d804e5c7c1ae4980749e175134d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9cdf53bb32ab9932f350a61855d31c9ff38fba5ad977fede380b8f3272fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e42a9e3cce41d65878109f32ae572a4a52fbb5f9dbe1915498499e344004417\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.875792 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.876082 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.876217 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.876331 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.876417 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.892843 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.913291 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f304b761-40a3-41ba-af33-a2b0634a55fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:44Z\\\",\\\"message\\\":\\\"imeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 18:43:44.816262 6935 lb_config.go:1031] Cluster endpoints for openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nI0214 18:43:44.816269 6935 services_controller.go:451] Built service openshift-kube-apiserver/apiserver cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.93\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0214 18:43:44.816299 6935 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:43:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81eee8582b37adf8a0
de5179243c762e24419e92659f04454c849ee43d92fce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7j56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fz879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.924335 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6wh27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5a3174-286c-4e61-a682-3367cc751fee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562ae320c8e3bf75555e510eda044b859886ae50d8583ddb65da2ac4a698dfc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qggbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6wh27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.939458 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b741b6871e8863f0a1da9fc3f0733d31401726b7c5b2be8c55d705cbab65aa0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.951942 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.968386 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f885c6c-b913-48e3-93fc-abf932515ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4d8efa204514cb1693dcab065ec748fd685083cb0ae76837cd9c03dbbe4e47a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649
c68e676d589f781bf1db97af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqlkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k5mzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.979281 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.979315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.979328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:47 crc 
kubenswrapper[4897]: I0214 18:43:47.979345 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.979359 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:47Z","lastTransitionTime":"2026-02-14T18:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:47 crc kubenswrapper[4897]: I0214 18:43:47.989657 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://387bad0a6729b2bb5dd740f2e79ed46b1adbb754f4df32c9f306cdd2671ec123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137e1e2516c580f51c970e0836aed9e5afe9aad73846dc9ed18bce70ca4f2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:47Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 
18:43:48.005580 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ldvzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b30895-0d98-44e4-8e31-2c5ebe5e1850\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T18:43:35Z\\\",\\\"message\\\":\\\"2026-02-14T18:42:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b\\\\n2026-02-14T18:42:50+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_c5c511e0-c251-4b05-9d13-6fd5bf0b7e4b to /host/opt/cni/bin/\\\\n2026-02-14T18:42:50Z [verbose] multus-daemon started\\\\n2026-02-14T18:42:50Z [verbose] Readiness Indicator file check\\\\n2026-02-14T18:43:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7vst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ldvzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.016313 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rpwkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39dde9bd-372a-45b1-bfa5-937929b27c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa52c55d9c8d85d4db96f8dfae8878a31e618b9a5888ce7fb0d189d039fe9fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-784sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rpwkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.028234 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrgww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b614985-b2f8-443d-9996-635d7e407b24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gqqw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrgww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc 
kubenswrapper[4897]: I0214 18:43:48.040310 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8356f5-2c48-45bc-a850-d81b87845955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc4633f63d9caf
9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0214 18:42:47.482300 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0214 18:42:47.482539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 18:42:47.488851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1316964624/tls.crt::/tmp/serving-cert-1316964624/tls.key\\\\\\\"\\\\nI0214 18:42:48.017916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0214 18:42:48.020405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0214 18:42:48.020423 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0214 18:42:48.020441 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0214 18:42:48.020446 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0214 18:42:48.025422 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0214 18:42:48.025436 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0214 18:42:48.025464 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0214 18:42:48.025484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0214 18:42:48.025492 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0214 18:42:48.025497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0214 18:42:48.025504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0214 18:42:48.027676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.052721 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.066369 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9ef77f8-7007-4f1d-8c1c-0c3fd2610cf1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89974ff119daf35829378f0dfdf513087609c39b605744ac26f665db1302dbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://808a8c6c54a1cd720fa0aa9e1b50886e7199d86005920d6698dd5ac2b018630a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0570288c5232f1f744a2e867db0b61aa5839ee3e927747a1526d03a2f3a8dfbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-14T18:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.081888 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.081933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.081946 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.081967 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.081984 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.082500 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9c70b24efe38d375812597319d35c4febd3a5195602b6817a40d261bb55d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.104130 4897 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"302cd01a-17a5-4519-aa94-02e79495e73c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T18:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62a34a3861770f5f41751762c4076f7538409143fc76997ad85b95bbe5789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T18:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04360ddd01585b2f02cd672e74658721fc998c447d11d7094b094075fca73be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb
085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b21603ce9f62679e9d8dc0e74e7f39a8a324a99926116458ee705b54cd393b0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dcd25ef7b397aae35866747c3cde56e78a93b780b6554f7c1ad4b818fd36a82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:51Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31a2fbe0b652fc81cce48e9bfcfb1e72c4fadc967e9d254d332bbd177bf3cfa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7696b82c594afbeca2b7cd4cbb2104944f76ec97d14e7536968aef1c58b183a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0fc1ba25a807d0ebae7bcd42cbe55f13efdf7a8d841f17db5bff6e5970a5e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T18:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T18:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbdfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T18:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rnbbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:48Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.185085 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.185138 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.185154 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.185181 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.185201 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.295611 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.295660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.295679 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.295705 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.295723 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.399573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.399959 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.400193 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.400369 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.400500 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.503646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.503712 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.503730 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.503755 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.503772 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.607541 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.607595 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.607617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.607642 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.607659 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.710078 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.710140 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.710157 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.710184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.710203 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.785708 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:52:54.598425024 +0000 UTC Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.793158 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.793196 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.793165 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:48 crc kubenswrapper[4897]: E0214 18:43:48.793351 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:48 crc kubenswrapper[4897]: E0214 18:43:48.793460 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:48 crc kubenswrapper[4897]: E0214 18:43:48.793640 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.813296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.813385 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.813403 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.813423 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.813441 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.916586 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.916659 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.916682 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.916714 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:48 crc kubenswrapper[4897]: I0214 18:43:48.916739 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:48Z","lastTransitionTime":"2026-02-14T18:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.020576 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.020644 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.020655 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.020676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.020691 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.124011 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.124111 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.124135 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.124171 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.124194 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.226802 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.226872 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.226899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.226928 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.226949 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.330019 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.330141 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.330160 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.330187 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.330205 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.433531 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.433597 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.433620 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.433648 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.433671 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.536903 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.536979 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.536996 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.537064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.537083 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.640579 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.640634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.640650 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.640673 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.640691 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.743574 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.743635 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.743652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.743678 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.743696 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.786394 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:09:07.227073853 +0000 UTC Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.793910 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:49 crc kubenswrapper[4897]: E0214 18:43:49.794138 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.847163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.847224 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.847241 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.847270 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.847292 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.950189 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.950273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.950296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.950328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:49 crc kubenswrapper[4897]: I0214 18:43:49.950356 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:49Z","lastTransitionTime":"2026-02-14T18:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.053532 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.053591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.053608 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.053633 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.053652 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.157295 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.157358 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.157379 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.157409 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.157433 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.260176 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.260230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.260248 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.260273 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.260292 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.363401 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.363459 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.363477 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.363503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.363521 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.466054 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.466258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.466325 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.466423 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.466508 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.570007 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.570415 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.570644 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.570931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.571191 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.674319 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.674368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.674384 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.674406 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.674424 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.776755 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.777101 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.777300 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.777473 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.777635 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.786622 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:56:06.505716573 +0000 UTC Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.793534 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.793563 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.793534 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:50 crc kubenswrapper[4897]: E0214 18:43:50.793689 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:50 crc kubenswrapper[4897]: E0214 18:43:50.793787 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:50 crc kubenswrapper[4897]: E0214 18:43:50.793878 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.881064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.881130 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.881149 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.881177 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.881199 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.983721 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.983776 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.983797 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.983825 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:50 crc kubenswrapper[4897]: I0214 18:43:50.983846 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:50Z","lastTransitionTime":"2026-02-14T18:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.087210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.087271 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.087288 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.087310 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.087327 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.191144 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.191680 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.191877 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.192026 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.192232 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.295844 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.295902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.295918 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.295941 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.295958 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.399068 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.399129 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.399146 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.399170 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.399189 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.502109 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.502184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.502208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.502235 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.502253 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.605427 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.605493 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.605513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.605537 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.605554 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.709140 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.709208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.709226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.709254 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.709274 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.728729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.728900 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:55.728866743 +0000 UTC m=+148.705275266 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.728970 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.729068 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.729206 4897 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.729238 4897 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.729299 4897 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.729285265 +0000 UTC m=+148.705693788 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.729325 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.729310916 +0000 UTC m=+148.705719429 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.786929 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:53:57.158582454 +0000 UTC Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.793359 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.793540 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.811481 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.811555 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.811578 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.811609 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.811635 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.829847 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.829981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830063 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830098 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830119 4897 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830191 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 18:43:51 crc 
kubenswrapper[4897]: E0214 18:43:51.830221 4897 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830241 4897 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830202 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.830174611 +0000 UTC m=+148.806583124 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:51 crc kubenswrapper[4897]: E0214 18:43:51.830325 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.830303425 +0000 UTC m=+148.806711948 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.916363 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.916422 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.916440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.916465 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:51 crc kubenswrapper[4897]: I0214 18:43:51.916482 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:51Z","lastTransitionTime":"2026-02-14T18:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.020080 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.020402 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.020562 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.020726 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.020878 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.124272 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.124332 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.124348 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.124393 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.124412 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.227803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.227873 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.227901 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.227931 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.227955 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.331108 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.331564 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.331582 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.331606 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.331625 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.434846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.434906 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.434926 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.434951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.434969 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.537494 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.537579 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.537602 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.537631 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.537654 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.640535 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.640605 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.640623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.640770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.640803 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.744064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.744139 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.744158 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.744186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.744235 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.788355 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:52:33.97324796 +0000 UTC Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.793237 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.793260 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:52 crc kubenswrapper[4897]: E0214 18:43:52.793418 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.793266 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:52 crc kubenswrapper[4897]: E0214 18:43:52.793522 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:52 crc kubenswrapper[4897]: E0214 18:43:52.793689 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.846530 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.846604 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.846621 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.846646 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.846664 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.949851 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.949922 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.949944 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.949974 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:52 crc kubenswrapper[4897]: I0214 18:43:52.949996 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:52Z","lastTransitionTime":"2026-02-14T18:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.053122 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.053164 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.053175 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.053191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.053202 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.155953 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.156022 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.156074 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.156100 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.156120 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.259188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.259270 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.259290 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.259315 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.259332 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.362519 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.362591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.362616 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.362645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.362668 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.466203 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.466269 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.466285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.466314 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.466332 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.569206 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.569291 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.569312 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.569340 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.569359 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.671852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.671918 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.671937 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.671960 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.671979 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.774831 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.774895 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.774914 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.774938 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.774958 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.788905 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:41:19.872184105 +0000 UTC Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.793304 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:53 crc kubenswrapper[4897]: E0214 18:43:53.793482 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.878863 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.878927 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.878946 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.878970 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.878989 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.982279 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.982353 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.982374 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.982402 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:53 crc kubenswrapper[4897]: I0214 18:43:53.982420 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:53Z","lastTransitionTime":"2026-02-14T18:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.085332 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.085400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.085420 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.085448 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.085469 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.188770 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.189174 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.189386 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.189548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.189700 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.293057 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.293130 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.293154 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.293181 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.293198 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.396121 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.396179 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.396195 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.396218 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.396236 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.499297 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.499369 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.499391 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.499449 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.499472 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.602851 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.602928 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.602951 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.602981 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.603002 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.706465 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.706534 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.706557 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.706588 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.706610 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.789544 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:53:00.456358272 +0000 UTC Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.792960 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.792982 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.793125 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:43:54 crc kubenswrapper[4897]: E0214 18:43:54.793254 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:43:54 crc kubenswrapper[4897]: E0214 18:43:54.793368 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:54 crc kubenswrapper[4897]: E0214 18:43:54.793530 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.809769 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.809806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.809820 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.809837 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.809849 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.912698 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.912783 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.912803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.912828 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:54 crc kubenswrapper[4897]: I0214 18:43:54.912847 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:54Z","lastTransitionTime":"2026-02-14T18:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.016703 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.016781 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.016803 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.016832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.016855 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.119517 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.119881 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.120055 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.120444 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.120592 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.224270 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.224336 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.224355 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.224413 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.224432 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.299567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.299635 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.299658 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.299685 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.299707 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.320104 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.326149 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.326202 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.326220 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.326244 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.326266 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.346268 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.351440 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.351634 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.351829 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.352016 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.352255 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.373829 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.379522 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.379592 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.379610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.379637 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.379657 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.400221 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.405580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.405651 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.405671 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.405699 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.405717 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.426955 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T18:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"41bffe32-6f10-4c7d-a67d-9930279261bf\\\",\\\"systemUUID\\\":\\\"3852ed47-2b76-43f4-bf60-51d80952e808\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T18:43:55Z is after 2025-08-24T17:21:41Z" Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.427256 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.429329 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.429393 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.429411 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.429435 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.429453 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.533536 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.533589 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.533606 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.533629 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.533651 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.637373 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.637468 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.637503 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.637534 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.637558 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.740400 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.740824 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.740998 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.741243 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.741399 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.790547 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:39:02.621169141 +0000 UTC Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.792987 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:55 crc kubenswrapper[4897]: E0214 18:43:55.793365 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.844832 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.844908 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.844934 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.844967 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.844989 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.948513 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.948603 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.948623 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.948659 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:55 crc kubenswrapper[4897]: I0214 18:43:55.948725 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:55Z","lastTransitionTime":"2026-02-14T18:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.052328 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.052407 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.052465 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.052499 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.052522 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.155740 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.155831 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.155857 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.155887 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.155912 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.258676 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.258736 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.258758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.258786 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.258807 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.361617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.361704 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.361725 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.361757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.361779 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.464584 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.464651 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.464673 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.464698 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.464716 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.568086 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.568167 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.568191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.568226 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.568249 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.671204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.671299 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.671320 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.671348 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.671368 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.774161 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.774213 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.774230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.774255 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.774273 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.791126 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:41:52.245487069 +0000 UTC
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.793301 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.793352 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.793299 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:43:56 crc kubenswrapper[4897]: E0214 18:43:56.793489 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:43:56 crc kubenswrapper[4897]: E0214 18:43:56.793580 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:43:56 crc kubenswrapper[4897]: E0214 18:43:56.793718 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.876598 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.876667 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.876692 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.876719 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.876739 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.979677 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.979735 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.979753 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.979776 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:56 crc kubenswrapper[4897]: I0214 18:43:56.979793 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:56Z","lastTransitionTime":"2026-02-14T18:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.082636 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.082698 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.082715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.082741 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.082758 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.186166 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.186546 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.186753 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.186899 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.187064 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.290947 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.291018 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.291073 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.291106 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.291128 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.394722 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.394788 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.394805 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.394831 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.394849 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.498180 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.498239 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.498256 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.498286 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.498304 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.601005 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.601083 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.601103 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.601131 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.601150 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.703895 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.704220 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.704362 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.704416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.704439 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.791435 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:43:21.112141249 +0000 UTC
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.793886 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:43:57 crc kubenswrapper[4897]: E0214 18:43:57.794120 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.807394 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.807455 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.807472 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.807496 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.807514 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.890351 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rnbbh" podStartSLOduration=70.890325779 podStartE2EDuration="1m10.890325779s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:57.861165148 +0000 UTC m=+90.837573681" watchObservedRunningTime="2026-02-14 18:43:57.890325779 +0000 UTC m=+90.866734302"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.890620 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.890607578 podStartE2EDuration="1m9.890607578s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:57.888997137 +0000 UTC m=+90.865405710" watchObservedRunningTime="2026-02-14 18:43:57.890607578 +0000 UTC m=+90.867016111"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.910089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.910188 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.910208 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.910275 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.910296 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:57Z","lastTransitionTime":"2026-02-14T18:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:57 crc kubenswrapper[4897]: I0214 18:43:57.972620 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6wh27" podStartSLOduration=69.972591617 podStartE2EDuration="1m9.972591617s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:57.972456162 +0000 UTC m=+90.948864685" watchObservedRunningTime="2026-02-14 18:43:57.972591617 +0000 UTC m=+90.949000110"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.010213 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhdvl" podStartSLOduration=70.010195305 podStartE2EDuration="1m10.010195305s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:57.992257908 +0000 UTC m=+90.968666421" watchObservedRunningTime="2026-02-14 18:43:58.010195305 +0000 UTC m=+90.986603788"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.013555 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.013612 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.013636 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.013668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.013689 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.033296 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.033267743 podStartE2EDuration="41.033267743s" podCreationTimestamp="2026-02-14 18:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:58.012823418 +0000 UTC m=+90.989231901" watchObservedRunningTime="2026-02-14 18:43:58.033267743 +0000 UTC m=+91.009676266"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.072239 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podStartSLOduration=71.072205993 podStartE2EDuration="1m11.072205993s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:58.050373954 +0000 UTC m=+91.026782507" watchObservedRunningTime="2026-02-14 18:43:58.072205993 +0000 UTC m=+91.048614516"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.116933 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.116996 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.117014 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.117085 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.117104 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.124804 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.124779953 podStartE2EDuration="1m10.124779953s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:58.09842315 +0000 UTC m=+91.074831693" watchObservedRunningTime="2026-02-14 18:43:58.124779953 +0000 UTC m=+91.101188466"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.166785 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ldvzr" podStartSLOduration=71.166763029 podStartE2EDuration="1m11.166763029s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:58.166239692 +0000 UTC m=+91.142648205" watchObservedRunningTime="2026-02-14 18:43:58.166763029 +0000 UTC m=+91.143171542"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.181323 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rpwkf" podStartSLOduration=71.181292628 podStartE2EDuration="1m11.181292628s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:43:58.180291486 +0000 UTC m=+91.156700019" watchObservedRunningTime="2026-02-14 18:43:58.181292628 +0000 UTC m=+91.157701151"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.219580 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.219639 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.219656 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.219680 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.219698 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.323074 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.323134 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.323151 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.323174 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.323207 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.425172 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.425591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.425885 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.426020 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.426185 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.529822 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.529902 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.529938 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.529964 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.529983 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.633480 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.633573 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.633600 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.633629 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.633647 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.737984 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.738070 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.738089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.738115 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.738132 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.792837 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:35:05.923391284 +0000 UTC
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.792941 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.792985 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.793026 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:43:58 crc kubenswrapper[4897]: E0214 18:43:58.793166 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:43:58 crc kubenswrapper[4897]: E0214 18:43:58.793293 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:43:58 crc kubenswrapper[4897]: E0214 18:43:58.793442 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.840852 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.840908 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.840924 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.840947 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.840965 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.943906 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.943968 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.943991 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.944019 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:58 crc kubenswrapper[4897]: I0214 18:43:58.944073 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:58Z","lastTransitionTime":"2026-02-14T18:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.046473 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.046526 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.046541 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.046560 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.046576 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.149965 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.150072 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.150097 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.150127 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.150150 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.253416 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.253490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.253508 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.253532 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.253552 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.357221 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.357302 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.357327 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.357363 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.357382 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.460862 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.460940 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.460965 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.460996 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.461021 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.564653 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.564789 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.564824 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.564857 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.564879 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.668783 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.668854 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.668874 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.668900 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.668918 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.772862 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.772953 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.772978 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.773013 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.773080 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.793562 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:19:06.663324921 +0000 UTC Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.793778 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:43:59 crc kubenswrapper[4897]: E0214 18:43:59.793968 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.814339 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.876182 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.876266 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.876285 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.876314 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.876333 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.980556 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.980637 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.980660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.980693 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:43:59 crc kubenswrapper[4897]: I0214 18:43:59.980713 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:43:59Z","lastTransitionTime":"2026-02-14T18:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.083999 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.084114 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.084133 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.084166 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.084188 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.186506 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.186550 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.186561 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.186577 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.186590 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.289942 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.290026 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.290080 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.290108 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.290131 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.393096 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.393161 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.393184 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.393210 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.393229 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.496521 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.496574 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.496591 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.496615 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.496633 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.599478 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.599549 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.599568 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.599598 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.599619 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.703011 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.703110 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.703135 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.703163 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.703186 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.793487 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.793549 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.793506 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:00 crc kubenswrapper[4897]: E0214 18:44:00.793727 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.793751 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:25:01.651929419 +0000 UTC Feb 14 18:44:00 crc kubenswrapper[4897]: E0214 18:44:00.793897 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:00 crc kubenswrapper[4897]: E0214 18:44:00.794016 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.820452 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.820520 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.820542 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.820569 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.820587 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.924395 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.924472 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.924494 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.924519 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:00 crc kubenswrapper[4897]: I0214 18:44:00.924537 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:00Z","lastTransitionTime":"2026-02-14T18:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.027207 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.027280 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.027296 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.027321 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.027339 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.130442 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.130541 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.130567 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.130597 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.130616 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.233444 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.233531 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.233548 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.233571 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.233589 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.336116 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.336187 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.336204 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.336229 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.336249 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.442903 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.442979 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.443021 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.443079 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.443097 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.545956 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.546367 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.546399 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.546662 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.546729 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.650467 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.651009 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.651230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.651438 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.651643 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.755191 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.755256 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.755278 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.755307 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.755329 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.793758 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.793867 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:55:27.52571018 +0000 UTC Feb 14 18:44:01 crc kubenswrapper[4897]: E0214 18:44:01.794694 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.795064 4897 scope.go:117] "RemoveContainer" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63" Feb 14 18:44:01 crc kubenswrapper[4897]: E0214 18:44:01.795382 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.818583 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.858718 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.858778 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.858796 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.858820 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.858838 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.962652 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.962730 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.962755 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.962789 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:01 crc kubenswrapper[4897]: I0214 18:44:01.962817 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:01Z","lastTransitionTime":"2026-02-14T18:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.066375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.066441 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.066462 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.066490 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.066510 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.170242 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.170317 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.170341 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.170378 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.170403 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.273154 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.273221 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.273241 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.273269 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.273288 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.376524 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.376581 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.376593 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.376610 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.376622 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.480382 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.480444 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.480460 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.480486 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.480504 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.583871 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.583939 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.583957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.583987 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.584005 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.686612 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.686670 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.686691 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.686717 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.686735 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.790123 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.790186 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.790206 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.790230 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.790247 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.793629 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.793697 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.793649 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:02 crc kubenswrapper[4897]: E0214 18:44:02.793868 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:02 crc kubenswrapper[4897]: E0214 18:44:02.793979 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:02 crc kubenswrapper[4897]: E0214 18:44:02.794177 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.794123 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:56:59.838289295 +0000 UTC Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.893224 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.893292 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.893308 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.893336 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.893355 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.996260 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.996322 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.996339 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.996365 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:02 crc kubenswrapper[4897]: I0214 18:44:02.996384 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:02Z","lastTransitionTime":"2026-02-14T18:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.100575 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.100628 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.100645 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.100668 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.100684 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.203531 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.203617 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.203640 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.203670 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.203689 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.306727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.306808 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.306826 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.306850 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.306867 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.410483 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.410543 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.410564 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.410590 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.410613 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.514106 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.514246 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.514368 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.514523 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.514549 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.617744 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.617806 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.617827 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.617856 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.617879 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.720569 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.720636 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.720654 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.720684 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.720702 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.793771 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:03 crc kubenswrapper[4897]: E0214 18:44:03.793976 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.794380 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:56:08.244923139 +0000 UTC Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.830258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.830325 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.830342 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.830367 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.830386 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.933663 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.933739 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.933758 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.933784 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:03 crc kubenswrapper[4897]: I0214 18:44:03.933801 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:03Z","lastTransitionTime":"2026-02-14T18:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.036911 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.036966 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.036986 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.037066 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.037087 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.139859 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.139914 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.139932 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.139957 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.139977 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.242639 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.242727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.242744 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.242769 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.242786 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.345848 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.345960 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.345980 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.346003 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.346019 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.448626 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.448693 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.448712 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.448734 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.448751 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.550979 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.551071 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.551089 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.551112 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.551130 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.653660 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.653715 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.653735 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.653757 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.653774 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.756727 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.756794 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.756812 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.756838 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.756855 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.793632 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.793672 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.793715 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:04 crc kubenswrapper[4897]: E0214 18:44:04.793798 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:04 crc kubenswrapper[4897]: E0214 18:44:04.793973 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:04 crc kubenswrapper[4897]: E0214 18:44:04.794148 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.794626 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:35:03.39734086 +0000 UTC Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.859375 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.859433 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.859450 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.859480 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.859497 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.962836 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.962895 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.962914 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.962937 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:04 crc kubenswrapper[4897]: I0214 18:44:04.962957 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:04Z","lastTransitionTime":"2026-02-14T18:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.065547 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.065637 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.065672 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.065706 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.065738 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.168719 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.168846 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.168920 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.168948 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.168965 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.274003 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.274124 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.274150 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.274181 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.274203 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.376800 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.376871 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.376919 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.376949 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.376969 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.480152 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.480231 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.480258 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.480288 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.480315 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.584012 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.584107 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.584125 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.584150 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.584169 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.687269 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.687327 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.687345 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.687369 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.687386 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.776946 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.777064 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.777090 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.777120 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.777184 4897 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T18:44:05Z","lastTransitionTime":"2026-02-14T18:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.793233 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:05 crc kubenswrapper[4897]: E0214 18:44:05.793424 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.795433 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:43:04.911818318 +0000 UTC Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.843711 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78"] Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.844393 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.847838 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.848326 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.848587 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.849101 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.902394 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff3b528-9413-42f5-9852-576e7a1b1a8e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 
18:44:05.903026 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2ff3b528-9413-42f5-9852-576e7a1b1a8e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.903205 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2ff3b528-9413-42f5-9852-576e7a1b1a8e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.903294 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff3b528-9413-42f5-9852-576e7a1b1a8e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.903406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff3b528-9413-42f5-9852-576e7a1b1a8e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:05 crc kubenswrapper[4897]: I0214 18:44:05.912388 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.912361933 podStartE2EDuration="4.912361933s" 
podCreationTimestamp="2026-02-14 18:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:05.910254867 +0000 UTC m=+98.886663450" watchObservedRunningTime="2026-02-14 18:44:05.912361933 +0000 UTC m=+98.888770426" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.008331 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff3b528-9413-42f5-9852-576e7a1b1a8e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.008432 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2ff3b528-9413-42f5-9852-576e7a1b1a8e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.008467 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2ff3b528-9413-42f5-9852-576e7a1b1a8e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.008509 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff3b528-9413-42f5-9852-576e7a1b1a8e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.008545 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff3b528-9413-42f5-9852-576e7a1b1a8e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.009320 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2ff3b528-9413-42f5-9852-576e7a1b1a8e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.009393 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2ff3b528-9413-42f5-9852-576e7a1b1a8e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.011068 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff3b528-9413-42f5-9852-576e7a1b1a8e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.015884 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2ff3b528-9413-42f5-9852-576e7a1b1a8e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.040475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff3b528-9413-42f5-9852-576e7a1b1a8e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bsm78\" (UID: \"2ff3b528-9413-42f5-9852-576e7a1b1a8e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.110327 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:06 crc kubenswrapper[4897]: E0214 18:44:06.110538 4897 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:44:06 crc kubenswrapper[4897]: E0214 18:44:06.110647 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs podName:6b614985-b2f8-443d-9996-635d7e407b24 nodeName:}" failed. No retries permitted until 2026-02-14 18:45:10.110616464 +0000 UTC m=+163.087024987 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs") pod "network-metrics-daemon-xrgww" (UID: "6b614985-b2f8-443d-9996-635d7e407b24") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.230145 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.432339 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" event={"ID":"2ff3b528-9413-42f5-9852-576e7a1b1a8e","Type":"ContainerStarted","Data":"0d9f890fa16ac8bbdf167153deb29be4742accf8007385f24011d7305555c24a"} Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.432423 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" event={"ID":"2ff3b528-9413-42f5-9852-576e7a1b1a8e","Type":"ContainerStarted","Data":"943d8f98eba1837834b6db53933b46a8fe69b5b2dc40afe15f4b77fdbc24fc2c"} Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.453532 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsm78" podStartSLOduration=79.453502443 podStartE2EDuration="1m19.453502443s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:06.452430278 +0000 UTC m=+99.428838801" watchObservedRunningTime="2026-02-14 18:44:06.453502443 +0000 UTC m=+99.429910966" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.455082 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=7.455069312 podStartE2EDuration="7.455069312s" podCreationTimestamp="2026-02-14 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:05.928533545 +0000 UTC m=+98.904942068" watchObservedRunningTime="2026-02-14 18:44:06.455069312 +0000 UTC m=+99.431477835" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.793636 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:06 crc kubenswrapper[4897]: E0214 18:44:06.794223 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.793732 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.793799 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:06 crc kubenswrapper[4897]: E0214 18:44:06.794437 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:06 crc kubenswrapper[4897]: E0214 18:44:06.794577 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.795620 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:35:49.972059814 +0000 UTC Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.795698 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 14 18:44:06 crc kubenswrapper[4897]: I0214 18:44:06.806576 4897 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 18:44:07 crc kubenswrapper[4897]: I0214 18:44:07.793070 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:07 crc kubenswrapper[4897]: E0214 18:44:07.795254 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:08 crc kubenswrapper[4897]: I0214 18:44:08.793244 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:08 crc kubenswrapper[4897]: I0214 18:44:08.793287 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:08 crc kubenswrapper[4897]: I0214 18:44:08.793368 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:08 crc kubenswrapper[4897]: E0214 18:44:08.793457 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:08 crc kubenswrapper[4897]: E0214 18:44:08.793546 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:08 crc kubenswrapper[4897]: E0214 18:44:08.793620 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:09 crc kubenswrapper[4897]: I0214 18:44:09.793369 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:09 crc kubenswrapper[4897]: E0214 18:44:09.793693 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:10 crc kubenswrapper[4897]: I0214 18:44:10.793399 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:10 crc kubenswrapper[4897]: I0214 18:44:10.793411 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:10 crc kubenswrapper[4897]: I0214 18:44:10.793414 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:10 crc kubenswrapper[4897]: E0214 18:44:10.793565 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:10 crc kubenswrapper[4897]: E0214 18:44:10.793690 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:10 crc kubenswrapper[4897]: E0214 18:44:10.793797 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:11 crc kubenswrapper[4897]: I0214 18:44:11.793075 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:11 crc kubenswrapper[4897]: E0214 18:44:11.793274 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:12 crc kubenswrapper[4897]: I0214 18:44:12.792998 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:12 crc kubenswrapper[4897]: I0214 18:44:12.793074 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:12 crc kubenswrapper[4897]: I0214 18:44:12.793152 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:12 crc kubenswrapper[4897]: E0214 18:44:12.793385 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:12 crc kubenswrapper[4897]: E0214 18:44:12.793510 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:12 crc kubenswrapper[4897]: E0214 18:44:12.793732 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:13 crc kubenswrapper[4897]: I0214 18:44:13.793469 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:13 crc kubenswrapper[4897]: E0214 18:44:13.794388 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:14 crc kubenswrapper[4897]: I0214 18:44:14.793277 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:14 crc kubenswrapper[4897]: I0214 18:44:14.793313 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:14 crc kubenswrapper[4897]: I0214 18:44:14.793411 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:14 crc kubenswrapper[4897]: E0214 18:44:14.793572 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:14 crc kubenswrapper[4897]: E0214 18:44:14.793665 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:14 crc kubenswrapper[4897]: E0214 18:44:14.793822 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:15 crc kubenswrapper[4897]: I0214 18:44:15.792955 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:15 crc kubenswrapper[4897]: E0214 18:44:15.793969 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:16 crc kubenswrapper[4897]: I0214 18:44:16.793391 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:16 crc kubenswrapper[4897]: E0214 18:44:16.793570 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:16 crc kubenswrapper[4897]: I0214 18:44:16.793614 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:16 crc kubenswrapper[4897]: I0214 18:44:16.794102 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:16 crc kubenswrapper[4897]: E0214 18:44:16.794242 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:16 crc kubenswrapper[4897]: E0214 18:44:16.794533 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:16 crc kubenswrapper[4897]: I0214 18:44:16.795836 4897 scope.go:117] "RemoveContainer" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63" Feb 14 18:44:16 crc kubenswrapper[4897]: E0214 18:44:16.796131 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fz879_openshift-ovn-kubernetes(f304b761-40a3-41ba-af33-a2b0634a55fb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" Feb 14 18:44:17 crc kubenswrapper[4897]: I0214 18:44:17.793330 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:17 crc kubenswrapper[4897]: E0214 18:44:17.795786 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:18 crc kubenswrapper[4897]: I0214 18:44:18.793737 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:18 crc kubenswrapper[4897]: E0214 18:44:18.793926 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:18 crc kubenswrapper[4897]: I0214 18:44:18.794210 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:18 crc kubenswrapper[4897]: I0214 18:44:18.794331 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:18 crc kubenswrapper[4897]: E0214 18:44:18.794472 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 18:44:18 crc kubenswrapper[4897]: E0214 18:44:18.794615 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 18:44:19 crc kubenswrapper[4897]: I0214 18:44:19.794497 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:44:19 crc kubenswrapper[4897]: E0214 18:44:19.794757 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24" Feb 14 18:44:20 crc kubenswrapper[4897]: I0214 18:44:20.793676 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:20 crc kubenswrapper[4897]: I0214 18:44:20.793769 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:20 crc kubenswrapper[4897]: E0214 18:44:20.793900 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 18:44:20 crc kubenswrapper[4897]: I0214 18:44:20.793930 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:20 crc kubenswrapper[4897]: E0214 18:44:20.794110 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:20 crc kubenswrapper[4897]: E0214 18:44:20.794262 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:21 crc kubenswrapper[4897]: I0214 18:44:21.793367 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:21 crc kubenswrapper[4897]: E0214 18:44:21.793553 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.495320 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/1.log"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.496371 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/0.log"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.496488 4897 generic.go:334] "Generic (PLEG): container finished" podID="b5b30895-0d98-44e4-8e31-2c5ebe5e1850" containerID="59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d" exitCode=1
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.496568 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerDied","Data":"59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d"}
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.496656 4897 scope.go:117] "RemoveContainer" containerID="491a7c0e79b5313a5f53175ca251d5ebdbdd034f6d0aa0c6dd71669842b1c2dd"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.497365 4897 scope.go:117] "RemoveContainer" containerID="59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d"
Feb 14 18:44:22 crc kubenswrapper[4897]: E0214 18:44:22.497727 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ldvzr_openshift-multus(b5b30895-0d98-44e4-8e31-2c5ebe5e1850)\"" pod="openshift-multus/multus-ldvzr" podUID="b5b30895-0d98-44e4-8e31-2c5ebe5e1850"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.793746 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.793794 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:22 crc kubenswrapper[4897]: I0214 18:44:22.793767 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:22 crc kubenswrapper[4897]: E0214 18:44:22.794166 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:22 crc kubenswrapper[4897]: E0214 18:44:22.794288 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:22 crc kubenswrapper[4897]: E0214 18:44:22.794541 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:23 crc kubenswrapper[4897]: I0214 18:44:23.503124 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/1.log"
Feb 14 18:44:23 crc kubenswrapper[4897]: I0214 18:44:23.793384 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:23 crc kubenswrapper[4897]: E0214 18:44:23.793625 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:24 crc kubenswrapper[4897]: I0214 18:44:24.793544 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:24 crc kubenswrapper[4897]: I0214 18:44:24.793653 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:24 crc kubenswrapper[4897]: I0214 18:44:24.793613 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:24 crc kubenswrapper[4897]: E0214 18:44:24.793788 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:24 crc kubenswrapper[4897]: E0214 18:44:24.793936 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:24 crc kubenswrapper[4897]: E0214 18:44:24.794016 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:25 crc kubenswrapper[4897]: I0214 18:44:25.792978 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:25 crc kubenswrapper[4897]: E0214 18:44:25.793206 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:26 crc kubenswrapper[4897]: I0214 18:44:26.793485 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:26 crc kubenswrapper[4897]: I0214 18:44:26.793573 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:26 crc kubenswrapper[4897]: E0214 18:44:26.794323 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:26 crc kubenswrapper[4897]: I0214 18:44:26.793613 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:26 crc kubenswrapper[4897]: E0214 18:44:26.794442 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:26 crc kubenswrapper[4897]: E0214 18:44:26.794619 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:27 crc kubenswrapper[4897]: I0214 18:44:27.792944 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:27 crc kubenswrapper[4897]: E0214 18:44:27.794701 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:27 crc kubenswrapper[4897]: E0214 18:44:27.817873 4897 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 14 18:44:27 crc kubenswrapper[4897]: E0214 18:44:27.897065 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 18:44:28 crc kubenswrapper[4897]: I0214 18:44:28.793273 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:28 crc kubenswrapper[4897]: I0214 18:44:28.793357 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:28 crc kubenswrapper[4897]: I0214 18:44:28.793433 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:28 crc kubenswrapper[4897]: E0214 18:44:28.793508 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:28 crc kubenswrapper[4897]: E0214 18:44:28.793597 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:28 crc kubenswrapper[4897]: E0214 18:44:28.793672 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:29 crc kubenswrapper[4897]: I0214 18:44:29.793336 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:29 crc kubenswrapper[4897]: E0214 18:44:29.793503 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:30 crc kubenswrapper[4897]: I0214 18:44:30.792981 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:30 crc kubenswrapper[4897]: E0214 18:44:30.793240 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:30 crc kubenswrapper[4897]: I0214 18:44:30.793371 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:30 crc kubenswrapper[4897]: E0214 18:44:30.793527 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:30 crc kubenswrapper[4897]: I0214 18:44:30.793591 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:30 crc kubenswrapper[4897]: E0214 18:44:30.793748 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:30 crc kubenswrapper[4897]: I0214 18:44:30.794595 4897 scope.go:117] "RemoveContainer" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63"
Feb 14 18:44:31 crc kubenswrapper[4897]: I0214 18:44:31.530682 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/3.log"
Feb 14 18:44:31 crc kubenswrapper[4897]: I0214 18:44:31.533502 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerStarted","Data":"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"}
Feb 14 18:44:31 crc kubenswrapper[4897]: I0214 18:44:31.534052 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:44:31 crc kubenswrapper[4897]: I0214 18:44:31.572139 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podStartSLOduration=104.572122153 podStartE2EDuration="1m44.572122153s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:31.571219444 +0000 UTC m=+124.547627947" watchObservedRunningTime="2026-02-14 18:44:31.572122153 +0000 UTC m=+124.548530636"
Feb 14 18:44:31 crc kubenswrapper[4897]: I0214 18:44:31.772404 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrgww"]
Feb 14 18:44:31 crc kubenswrapper[4897]: I0214 18:44:31.772557 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:31 crc kubenswrapper[4897]: E0214 18:44:31.772661 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:32 crc kubenswrapper[4897]: I0214 18:44:32.793139 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:32 crc kubenswrapper[4897]: I0214 18:44:32.793143 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:32 crc kubenswrapper[4897]: E0214 18:44:32.793619 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:32 crc kubenswrapper[4897]: I0214 18:44:32.793158 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:32 crc kubenswrapper[4897]: E0214 18:44:32.793783 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:32 crc kubenswrapper[4897]: E0214 18:44:32.793892 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:32 crc kubenswrapper[4897]: E0214 18:44:32.898424 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 18:44:33 crc kubenswrapper[4897]: I0214 18:44:33.793418 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:33 crc kubenswrapper[4897]: E0214 18:44:33.793620 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:34 crc kubenswrapper[4897]: I0214 18:44:34.793380 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:34 crc kubenswrapper[4897]: I0214 18:44:34.793435 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:34 crc kubenswrapper[4897]: I0214 18:44:34.793409 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:34 crc kubenswrapper[4897]: E0214 18:44:34.793574 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:34 crc kubenswrapper[4897]: E0214 18:44:34.793662 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:34 crc kubenswrapper[4897]: E0214 18:44:34.793801 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:35 crc kubenswrapper[4897]: I0214 18:44:35.793087 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:35 crc kubenswrapper[4897]: E0214 18:44:35.793331 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:36 crc kubenswrapper[4897]: I0214 18:44:36.793697 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:36 crc kubenswrapper[4897]: I0214 18:44:36.793721 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:36 crc kubenswrapper[4897]: I0214 18:44:36.793747 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:36 crc kubenswrapper[4897]: E0214 18:44:36.793878 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:36 crc kubenswrapper[4897]: E0214 18:44:36.794018 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:36 crc kubenswrapper[4897]: E0214 18:44:36.794298 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:36 crc kubenswrapper[4897]: I0214 18:44:36.794468 4897 scope.go:117] "RemoveContainer" containerID="59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d"
Feb 14 18:44:37 crc kubenswrapper[4897]: I0214 18:44:37.557887 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/1.log"
Feb 14 18:44:37 crc kubenswrapper[4897]: I0214 18:44:37.558003 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerStarted","Data":"a994cd3d62a87d79d3720ba26ad60a180a3ea6b395c07485dd6d24071ac72977"}
Feb 14 18:44:37 crc kubenswrapper[4897]: I0214 18:44:37.793673 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:37 crc kubenswrapper[4897]: E0214 18:44:37.795927 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:37 crc kubenswrapper[4897]: E0214 18:44:37.900271 4897 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 18:44:38 crc kubenswrapper[4897]: I0214 18:44:38.793238 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:38 crc kubenswrapper[4897]: I0214 18:44:38.793323 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:38 crc kubenswrapper[4897]: E0214 18:44:38.793410 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:38 crc kubenswrapper[4897]: I0214 18:44:38.793255 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:38 crc kubenswrapper[4897]: E0214 18:44:38.793470 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:38 crc kubenswrapper[4897]: E0214 18:44:38.793519 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:39 crc kubenswrapper[4897]: I0214 18:44:39.794168 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:39 crc kubenswrapper[4897]: E0214 18:44:39.794410 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:40 crc kubenswrapper[4897]: I0214 18:44:40.793519 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:40 crc kubenswrapper[4897]: I0214 18:44:40.793622 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:40 crc kubenswrapper[4897]: I0214 18:44:40.793542 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:40 crc kubenswrapper[4897]: E0214 18:44:40.793675 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:40 crc kubenswrapper[4897]: E0214 18:44:40.793796 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:40 crc kubenswrapper[4897]: E0214 18:44:40.793858 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:41 crc kubenswrapper[4897]: I0214 18:44:41.794024 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:41 crc kubenswrapper[4897]: E0214 18:44:41.794260 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrgww" podUID="6b614985-b2f8-443d-9996-635d7e407b24"
Feb 14 18:44:42 crc kubenswrapper[4897]: I0214 18:44:42.793442 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:42 crc kubenswrapper[4897]: I0214 18:44:42.793529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:42 crc kubenswrapper[4897]: I0214 18:44:42.793442 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:42 crc kubenswrapper[4897]: E0214 18:44:42.793639 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 18:44:42 crc kubenswrapper[4897]: E0214 18:44:42.793795 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 18:44:42 crc kubenswrapper[4897]: E0214 18:44:42.793957 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 18:44:43 crc kubenswrapper[4897]: I0214 18:44:43.793202 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww"
Feb 14 18:44:43 crc kubenswrapper[4897]: I0214 18:44:43.796734 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 14 18:44:43 crc kubenswrapper[4897]: I0214 18:44:43.798817 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.792882 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.792958 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.793085 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.796056 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.796144 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.796731 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 14 18:44:44 crc kubenswrapper[4897]: I0214 18:44:44.796826 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 14 18:44:45 crc kubenswrapper[4897]: I0214 18:44:45.247188 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.844853 4897 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.902817 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g8d99"]
Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.903414 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99"
Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.908609 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd"]
Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.909366 4897 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.909447 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.909606 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.910112 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.910402 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.912678 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.912735 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.914754 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.914972 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.915780 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tndnf"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.917194 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.918323 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zh576"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.919363 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.925623 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.926204 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.926424 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.927178 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.928592 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.928856 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.930135 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.930217 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 
18:44:46.930236 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.931265 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-62b7q"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.931314 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.931785 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.932538 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.933488 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6jjtk"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.934259 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.935522 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8v6s"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.936652 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.937939 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.939830 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.941614 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.938346 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.938505 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.938548 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.938389 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.941274 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.941640 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.949920 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.959549 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.968958 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.995513 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.995794 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996382 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eca953dd-cbbc-404a-974f-babb9bf2d0e8-serving-cert\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996431 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-audit\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-serving-cert\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 
18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996532 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c5ace00-d072-440a-bc7b-982b96f636e7-node-pullsecrets\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996563 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-client-ca\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996592 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6a281f2-1a7e-419e-8736-57c1a3bae82e-images\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996620 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0909c109-9799-4bc9-9d4f-1d97a95ec410-config\") pod 
\"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996645 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-config\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996671 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-etcd-client\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996715 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0909c109-9799-4bc9-9d4f-1d97a95ec410-auth-proxy-config\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kljwk\" (UniqueName: \"kubernetes.io/projected/c6a281f2-1a7e-419e-8736-57c1a3bae82e-kube-api-access-kljwk\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996779 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6a281f2-1a7e-419e-8736-57c1a3bae82e-config\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996799 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/0909c109-9799-4bc9-9d4f-1d97a95ec410-kube-api-access-nj469\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996821 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44zzq\" (UniqueName: \"kubernetes.io/projected/eca953dd-cbbc-404a-974f-babb9bf2d0e8-kube-api-access-44zzq\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996844 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-encryption-config\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996866 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-etcd-serving-ca\") pod \"apiserver-76f77b778f-tndnf\" (UID: 
\"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-config\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996949 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94j7\" (UniqueName: \"kubernetes.io/projected/5c5ace00-d072-440a-bc7b-982b96f636e7-kube-api-access-w94j7\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.996980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0909c109-9799-4bc9-9d4f-1d97a95ec410-machine-approver-tls\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-image-import-ca\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997047 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c5ace00-d072-440a-bc7b-982b96f636e7-audit-dir\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997095 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6a281f2-1a7e-419e-8736-57c1a3bae82e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997207 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997286 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997290 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997440 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997522 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997544 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997635 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7lrwj"] Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997751 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.997873 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.998719 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.998835 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.999651 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.999728 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.999920 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 18:44:46 crc kubenswrapper[4897]: I0214 18:44:46.999946 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 18:44:47 crc 
kubenswrapper[4897]: I0214 18:44:47.000005 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000114 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000202 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000261 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000278 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000347 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000391 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000586 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000680 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000758 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000831 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000902 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.000968 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.001062 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.001144 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.001243 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.001338 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.001418 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.002760 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.002884 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.003002 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.003151 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.003258 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.003720 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.003860 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.004195 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.004389 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.004538 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.006422 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9kvql"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.006869 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9kvql" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.007087 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-msfx9"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.007662 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.005291 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.008119 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g8d99"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.008264 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.005652 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.005361 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.005419 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.005981 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.013050 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.013679 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.013840 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.014355 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.015641 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.015678 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.015943 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.016577 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.017053 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.020956 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.022053 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.022134 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.022071 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.022165 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.022357 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.022388 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.023826 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.029932 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.030400 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.030607 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.030868 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.031484 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.032669 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.032981 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.038992 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.085092 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.085622 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.085823 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.086018 4897 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.086648 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.086833 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.086968 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.087042 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.087288 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.087661 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.088304 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.088330 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.088944 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.089905 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9n8vm"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.090294 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.090837 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.091305 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.092008 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.092104 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.092234 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.094615 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.097010 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-fq4zf"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.097714 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-c5z8g"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.098139 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.098439 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099449 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099669 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c5ace00-d072-440a-bc7b-982b96f636e7-audit-dir\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099687 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6a281f2-1a7e-419e-8736-57c1a3bae82e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099709 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-config\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099728 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tqh\" (UniqueName: \"kubernetes.io/projected/65112b94-8028-49f5-91fc-b83b49f30017-kube-api-access-56tqh\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099744 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b15d9f59-a87a-47ef-a61f-4e791186229d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099764 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b15d9f59-a87a-47ef-a61f-4e791186229d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099779 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb57x\" (UniqueName: 
\"kubernetes.io/projected/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-kube-api-access-sb57x\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099795 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cd062a1-246d-4ad6-b81a-a9f103576a32-serving-cert\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099821 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099837 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75s5t\" (UniqueName: \"kubernetes.io/projected/e6fba668-d4b4-45fb-89ec-7808a1269d1d-kube-api-access-75s5t\") pod \"dns-operator-744455d44c-7lrwj\" (UID: \"e6fba668-d4b4-45fb-89ec-7808a1269d1d\") " pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099852 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwgm\" (UniqueName: \"kubernetes.io/projected/67063058-60ca-4efd-a102-cd90d5e43e56-kube-api-access-kzwgm\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099870 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v7bw\" (UniqueName: \"kubernetes.io/projected/88a85445-8209-4b30-a0e0-c0f14d790fb5-kube-api-access-6v7bw\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099888 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6fba668-d4b4-45fb-89ec-7808a1269d1d-metrics-tls\") pod \"dns-operator-744455d44c-7lrwj\" (UID: \"e6fba668-d4b4-45fb-89ec-7808a1269d1d\") " pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099904 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-serving-cert\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099921 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnmvt\" (UniqueName: \"kubernetes.io/projected/a7b90621-706c-47e9-b361-14c9bb002f11-kube-api-access-cnmvt\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099937 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-etcd-client\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-encryption-config\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.099999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eca953dd-cbbc-404a-974f-babb9bf2d0e8-serving-cert\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100015 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100047 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-audit\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100070 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-dir\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100087 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100103 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67063058-60ca-4efd-a102-cd90d5e43e56-audit-dir\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100119 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-serving-cert\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100136 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100151 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-policies\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100168 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c5ace00-d072-440a-bc7b-982b96f636e7-node-pullsecrets\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100186 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-oauth-config\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100204 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-service-ca\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100220 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/3b9a689e-54e3-48df-a102-500878c35aa2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100236 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7c6\" (UniqueName: \"kubernetes.io/projected/103a8a7a-d7e9-4d28-b909-cf3468e483e9-kube-api-access-bf7c6\") pod \"cluster-samples-operator-665b6dd947-xcksp\" (UID: \"103a8a7a-d7e9-4d28-b909-cf3468e483e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100253 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-client-ca\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100268 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6a281f2-1a7e-419e-8736-57c1a3bae82e-images\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100284 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0909c109-9799-4bc9-9d4f-1d97a95ec410-config\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: 
I0214 18:44:47.100298 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-config\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100314 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-oauth-serving-cert\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100329 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e05c0cf5-7ca3-47f2-810f-492e73edc19a-serving-cert\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100344 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-etcd-client\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100361 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65112b94-8028-49f5-91fc-b83b49f30017-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100378 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77cd9be5-c96a-494c-9d40-1068555dceda-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100391 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77cd9be5-c96a-494c-9d40-1068555dceda-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-trusted-ca-bundle\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100420 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-serving-cert\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100442 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/0909c109-9799-4bc9-9d4f-1d97a95ec410-auth-proxy-config\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100457 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-service-ca\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100471 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15d9f59-a87a-47ef-a61f-4e791186229d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100486 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100517 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-serving-cert\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100535 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kljwk\" (UniqueName: \"kubernetes.io/projected/c6a281f2-1a7e-419e-8736-57c1a3bae82e-kube-api-access-kljwk\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100551 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlsx8\" (UniqueName: \"kubernetes.io/projected/e05c0cf5-7ca3-47f2-810f-492e73edc19a-kube-api-access-nlsx8\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100566 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cd062a1-246d-4ad6-b81a-a9f103576a32-config\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c6a281f2-1a7e-419e-8736-57c1a3bae82e-config\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100603 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbp6\" (UniqueName: \"kubernetes.io/projected/3b9a689e-54e3-48df-a102-500878c35aa2-kube-api-access-ktbp6\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100637 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100665 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b9a689e-54e3-48df-a102-500878c35aa2-serving-cert\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100698 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/0909c109-9799-4bc9-9d4f-1d97a95ec410-kube-api-access-nj469\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100722 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44zzq\" (UniqueName: \"kubernetes.io/projected/eca953dd-cbbc-404a-974f-babb9bf2d0e8-kube-api-access-44zzq\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77cd9be5-c96a-494c-9d40-1068555dceda-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100777 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-config\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100809 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b90621-706c-47e9-b361-14c9bb002f11-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100837 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-v2l7c\" (UniqueName: \"kubernetes.io/projected/b15d9f59-a87a-47ef-a61f-4e791186229d-kube-api-access-v2l7c\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100865 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7b90621-706c-47e9-b361-14c9bb002f11-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100895 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-encryption-config\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100912 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100944 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-client\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-etcd-serving-ca\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100977 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.100994 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101020 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-client-ca\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101188 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-config\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101258 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65112b94-8028-49f5-91fc-b83b49f30017-metrics-tls\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101283 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn94b\" (UniqueName: \"kubernetes.io/projected/7ec1f803-3889-4483-87ae-9a38bd020818-kube-api-access-jn94b\") pod \"downloads-7954f5f757-9kvql\" (UID: \"7ec1f803-3889-4483-87ae-9a38bd020818\") " pod="openshift-console/downloads-7954f5f757-9kvql" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101313 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-config\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101336 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w94j7\" (UniqueName: \"kubernetes.io/projected/5c5ace00-d072-440a-bc7b-982b96f636e7-kube-api-access-w94j7\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101358 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101381 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101418 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0909c109-9799-4bc9-9d4f-1d97a95ec410-machine-approver-tls\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101431 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101439 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-image-import-ca\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101668 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-ca\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101701 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjgf9\" (UniqueName: \"kubernetes.io/projected/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-kube-api-access-cjgf9\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101728 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-audit-policies\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xmx4\" (UniqueName: \"kubernetes.io/projected/0cd062a1-246d-4ad6-b81a-a9f103576a32-kube-api-access-4xmx4\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101777 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65112b94-8028-49f5-91fc-b83b49f30017-trusted-ca\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101807 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/103a8a7a-d7e9-4d28-b909-cf3468e483e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xcksp\" (UID: \"103a8a7a-d7e9-4d28-b909-cf3468e483e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.101830 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd062a1-246d-4ad6-b81a-a9f103576a32-trusted-ca\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 
14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.102149 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-config\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.102533 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-image-import-ca\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.102917 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.102945 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c5ace00-d072-440a-bc7b-982b96f636e7-audit-dir\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.103896 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0909c109-9799-4bc9-9d4f-1d97a95ec410-auth-proxy-config\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc 
kubenswrapper[4897]: I0214 18:44:47.104723 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6a281f2-1a7e-419e-8736-57c1a3bae82e-config\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.105625 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-etcd-serving-ca\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.105795 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.106143 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-audit\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.106164 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c5ace00-d072-440a-bc7b-982b96f636e7-node-pullsecrets\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.106765 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6a281f2-1a7e-419e-8736-57c1a3bae82e-images\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: 
\"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.106813 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5ace00-d072-440a-bc7b-982b96f636e7-config\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.107115 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-client-ca\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.107171 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.107492 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0909c109-9799-4bc9-9d4f-1d97a95ec410-config\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.107697 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6a281f2-1a7e-419e-8736-57c1a3bae82e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.108051 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.113172 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-etcd-client\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.117712 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d8kqp"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.117869 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.118177 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-serving-cert\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.122485 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.123667 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.123762 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.123822 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.124821 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.125477 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.125599 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wg7rv"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.125859 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.126662 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eca953dd-cbbc-404a-974f-babb9bf2d0e8-serving-cert\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.129332 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.129735 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.132239 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.134581 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rx2r9"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.139808 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.140448 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.141661 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tndnf"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.141736 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.144325 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c5ace00-d072-440a-bc7b-982b96f636e7-encryption-config\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.144538 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.145218 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.145773 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.145840 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.146817 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.151881 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.148638 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0909c109-9799-4bc9-9d4f-1d97a95ec410-machine-approver-tls\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.154587 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.155260 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8v6s"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.155289 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zh576"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.155300 4897 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-console/console-f9d7485db-6jjtk"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.155312 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.155684 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.156614 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.157067 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.157092 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fh6qr"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.157505 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz"] Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.157597 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.157867 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.158045 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.158189 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.158382 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.158529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.158024 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9kvql"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.159900 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rx2r9"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.161230 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-62b7q"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.161983 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.163342 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.164896 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.165795 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.166835 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.167215 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-msfx9"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.168474 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.169541 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.170648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.171625 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.172450 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.173648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.175217 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7lrwj"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.176201 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.176782 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rdcrz"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.177450 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rdcrz"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.177829 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bzvvc"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.178531 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bzvvc"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.180425 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.181120 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.182017 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d8kqp"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.184143 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.185170 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wg7rv"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.186102 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.188324 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.190129 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.191397 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9n8vm"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.193254 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.195201 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fq4zf"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.196837 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.198347 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.200541 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rdcrz"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.202003 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.203644 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bzvvc"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.205668 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jmbj5"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.205933 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.207199 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.207419 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jmbj5"]
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.214274 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15d9f59-a87a-47ef-a61f-4e791186229d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.214519 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.214595 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-service-ca\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.214868 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5zz\" (UniqueName: \"kubernetes.io/projected/72c5452f-efd7-406e-84de-0275882c823e-kube-api-access-nt5zz\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.214972 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215113 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-serving-cert\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215234 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlsx8\" (UniqueName: \"kubernetes.io/projected/e05c0cf5-7ca3-47f2-810f-492e73edc19a-kube-api-access-nlsx8\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215312 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-srv-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215402 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cd062a1-246d-4ad6-b81a-a9f103576a32-config\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215474 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215555 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbp6\" (UniqueName: \"kubernetes.io/projected/3b9a689e-54e3-48df-a102-500878c35aa2-kube-api-access-ktbp6\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215636 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b9a689e-54e3-48df-a102-500878c35aa2-serving-cert\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215709 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c5452f-efd7-406e-84de-0275882c823e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215786 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77cd9be5-c96a-494c-9d40-1068555dceda-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215859 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-config\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215939 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.215950 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2l7c\" (UniqueName: \"kubernetes.io/projected/b15d9f59-a87a-47ef-a61f-4e791186229d-kube-api-access-v2l7c\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216052 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7b90621-706c-47e9-b361-14c9bb002f11-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b90621-706c-47e9-b361-14c9bb002f11-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216099 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216118 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216141 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-client\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216160 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-client-ca\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216177 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-config\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216211 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216229 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216245 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn94b\" (UniqueName: \"kubernetes.io/projected/7ec1f803-3889-4483-87ae-9a38bd020818-kube-api-access-jn94b\") pod \"downloads-7954f5f757-9kvql\" (UID: \"7ec1f803-3889-4483-87ae-9a38bd020818\") " pod="openshift-console/downloads-7954f5f757-9kvql"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216269 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7z2q\" (UniqueName: \"kubernetes.io/projected/faa970d9-b5d7-49a1-b162-2bed0f528b71-kube-api-access-t7z2q\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216290 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65112b94-8028-49f5-91fc-b83b49f30017-metrics-tls\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216308 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216326 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216356 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216385 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-ca\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216415 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjgf9\" (UniqueName: \"kubernetes.io/projected/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-kube-api-access-cjgf9\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216433 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-audit-policies\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216453 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c5452f-efd7-406e-84de-0275882c823e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216456 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cd062a1-246d-4ad6-b81a-a9f103576a32-config\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216472 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xmx4\" (UniqueName: \"kubernetes.io/projected/0cd062a1-246d-4ad6-b81a-a9f103576a32-kube-api-access-4xmx4\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216543 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65112b94-8028-49f5-91fc-b83b49f30017-trusted-ca\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216583 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/103a8a7a-d7e9-4d28-b909-cf3468e483e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xcksp\" (UID: \"103a8a7a-d7e9-4d28-b909-cf3468e483e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216615 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd062a1-246d-4ad6-b81a-a9f103576a32-trusted-ca\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-config\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216678 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb57x\" (UniqueName: \"kubernetes.io/projected/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-kube-api-access-sb57x\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216706 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cd062a1-246d-4ad6-b81a-a9f103576a32-serving-cert\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56tqh\" (UniqueName: \"kubernetes.io/projected/65112b94-8028-49f5-91fc-b83b49f30017-kube-api-access-56tqh\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216776 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b15d9f59-a87a-47ef-a61f-4e791186229d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216802 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b15d9f59-a87a-47ef-a61f-4e791186229d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216832 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-profile-collector-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216859 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216887 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75s5t\" (UniqueName: \"kubernetes.io/projected/e6fba668-d4b4-45fb-89ec-7808a1269d1d-kube-api-access-75s5t\") pod \"dns-operator-744455d44c-7lrwj\" (UID: \"e6fba668-d4b4-45fb-89ec-7808a1269d1d\") " pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216911 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwgm\" (UniqueName: \"kubernetes.io/projected/67063058-60ca-4efd-a102-cd90d5e43e56-kube-api-access-kzwgm\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216928 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-config\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216943 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7bw\" (UniqueName: \"kubernetes.io/projected/88a85445-8209-4b30-a0e0-c0f14d790fb5-kube-api-access-6v7bw\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216992 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6fba668-d4b4-45fb-89ec-7808a1269d1d-metrics-tls\") pod \"dns-operator-744455d44c-7lrwj\" (UID: \"e6fba668-d4b4-45fb-89ec-7808a1269d1d\") " pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217045 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217074 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-serving-cert\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217102 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnmvt\" (UniqueName: \"kubernetes.io/projected/a7b90621-706c-47e9-b361-14c9bb002f11-kube-api-access-cnmvt\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217129 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-etcd-client\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217157 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-encryption-config\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-dir\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217233 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.217338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67063058-60ca-4efd-a102-cd90d5e43e56-audit-dir\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.219953 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.218416 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-serving-cert\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.220007 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-policies\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.218946 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-config\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.219282 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.219576 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.219627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65112b94-8028-49f5-91fc-b83b49f30017-trusted-ca\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.220872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b9a689e-54e3-48df-a102-500878c35aa2-serving-cert\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.216166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-service-ca\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.221444 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/103a8a7a-d7e9-4d28-b909-cf3468e483e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xcksp\" (UID: \"103a8a7a-d7e9-4d28-b909-cf3468e483e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.219256 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67063058-60ca-4efd-a102-cd90d5e43e56-audit-dir\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.221789 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-oauth-config\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.221854 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-service-ca\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.221901 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.221908 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf7c6\" (UniqueName: \"kubernetes.io/projected/103a8a7a-d7e9-4d28-b909-cf3468e483e9-kube-api-access-bf7c6\") pod \"cluster-samples-operator-665b6dd947-xcksp\" (UID: \"103a8a7a-d7e9-4d28-b909-cf3468e483e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.222178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:47 crc kubenswrapper[4897]:
I0214 18:44:47.222923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b9a689e-54e3-48df-a102-500878c35aa2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223136 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77cd9be5-c96a-494c-9d40-1068555dceda-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223573 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e05c0cf5-7ca3-47f2-810f-492e73edc19a-serving-cert\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223637 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-oauth-serving-cert\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223770 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15d9f59-a87a-47ef-a61f-4e791186229d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223862 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-trusted-ca-bundle\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223895 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-serving-cert\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223928 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-policies\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.223945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65112b94-8028-49f5-91fc-b83b49f30017-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224002 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: 
\"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224017 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77cd9be5-c96a-494c-9d40-1068555dceda-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224065 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77cd9be5-c96a-494c-9d40-1068555dceda-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224209 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b9a689e-54e3-48df-a102-500878c35aa2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224222 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-client-ca\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224488 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-dir\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224554 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.224587 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.225178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67063058-60ca-4efd-a102-cd90d5e43e56-audit-policies\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.225347 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b90621-706c-47e9-b361-14c9bb002f11-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" Feb 14 18:44:47 crc 
kubenswrapper[4897]: I0214 18:44:47.225458 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.225779 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7b90621-706c-47e9-b361-14c9bb002f11-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.225904 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65112b94-8028-49f5-91fc-b83b49f30017-metrics-tls\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226192 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226326 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226518 
4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226542 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cd062a1-246d-4ad6-b81a-a9f103576a32-serving-cert\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226705 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-serving-cert\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226840 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd062a1-246d-4ad6-b81a-a9f103576a32-trusted-ca\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.226847 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6fba668-d4b4-45fb-89ec-7808a1269d1d-metrics-tls\") pod \"dns-operator-744455d44c-7lrwj\" (UID: \"e6fba668-d4b4-45fb-89ec-7808a1269d1d\") " pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 
18:44:47.227208 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-serving-cert\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.227423 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-oauth-serving-cert\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.227772 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-trusted-ca-bundle\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.228414 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-etcd-client\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.228713 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.229990 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/67063058-60ca-4efd-a102-cd90d5e43e56-encryption-config\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.230296 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-oauth-config\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.231706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-config\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.231794 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-ca\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.232098 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-service-ca\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.233290 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e05c0cf5-7ca3-47f2-810f-492e73edc19a-etcd-client\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.234060 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e05c0cf5-7ca3-47f2-810f-492e73edc19a-serving-cert\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.234388 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b15d9f59-a87a-47ef-a61f-4e791186229d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.235487 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77cd9be5-c96a-494c-9d40-1068555dceda-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.268337 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.286440 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 18:44:47 crc 
kubenswrapper[4897]: I0214 18:44:47.306930 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.324930 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt5zz\" (UniqueName: \"kubernetes.io/projected/72c5452f-efd7-406e-84de-0275882c823e-kube-api-access-nt5zz\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.325005 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-srv-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.325064 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c5452f-efd7-406e-84de-0275882c823e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.325146 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7z2q\" (UniqueName: \"kubernetes.io/projected/faa970d9-b5d7-49a1-b162-2bed0f528b71-kube-api-access-t7z2q\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.325205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c5452f-efd7-406e-84de-0275882c823e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.325284 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-profile-collector-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.325604 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.346571 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.365780 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.385825 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.405601 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.437373 4897 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.446697 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.467224 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.489779 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.510332 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.526981 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.546500 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.565935 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.585870 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.606733 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.626427 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.646722 4897 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.667017 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.715470 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kljwk\" (UniqueName: \"kubernetes.io/projected/c6a281f2-1a7e-419e-8736-57c1a3bae82e-kube-api-access-kljwk\") pod \"machine-api-operator-5694c8668f-zh576\" (UID: \"c6a281f2-1a7e-419e-8736-57c1a3bae82e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.735109 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj469\" (UniqueName: \"kubernetes.io/projected/0909c109-9799-4bc9-9d4f-1d97a95ec410-kube-api-access-nj469\") pod \"machine-approver-56656f9798-zs6vd\" (UID: \"0909c109-9799-4bc9-9d4f-1d97a95ec410\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.753601 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44zzq\" (UniqueName: \"kubernetes.io/projected/eca953dd-cbbc-404a-974f-babb9bf2d0e8-kube-api-access-44zzq\") pod \"controller-manager-879f6c89f-g8d99\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.766582 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.771913 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94j7\" (UniqueName: 
\"kubernetes.io/projected/5c5ace00-d072-440a-bc7b-982b96f636e7-kube-api-access-w94j7\") pod \"apiserver-76f77b778f-tndnf\" (UID: \"5c5ace00-d072-440a-bc7b-982b96f636e7\") " pod="openshift-apiserver/apiserver-76f77b778f-tndnf"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.787475 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.806563 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.826835 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.833004 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.846926 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.866221 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.887108 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.898789 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.906627 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.913324 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tndnf"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.927182 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 14 18:44:47 crc kubenswrapper[4897]: W0214 18:44:47.938608 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0909c109_9799_4bc9_9d4f_1d97a95ec410.slice/crio-bd5a64b4fd15cb5f326faa7d455c9214bc7d5bf953d6d6d80cb4d49e2088e576 WatchSource:0}: Error finding container bd5a64b4fd15cb5f326faa7d455c9214bc7d5bf953d6d6d80cb4d49e2088e576: Status 404 returned error can't find the container with id bd5a64b4fd15cb5f326faa7d455c9214bc7d5bf953d6d6d80cb4d49e2088e576
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.946582 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.966819 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.987417 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576"
Feb 14 18:44:47 crc kubenswrapper[4897]: I0214 18:44:47.990154 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.006623 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.025949 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.046452 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.069834 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.085829 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.105956 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g8d99"]
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.119895 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.126466 4897 request.go:700] Waited for 1.000317715s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.136782 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.140114 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tndnf"]
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.147584 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.159710 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c5452f-efd7-406e-84de-0275882c823e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.166173 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.186283 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.206247 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.227095 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.247064 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.264887 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zh576"]
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.266461 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.286643 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.306552 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: E0214 18:44:48.325328 4897 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 14 18:44:48 crc kubenswrapper[4897]: E0214 18:44:48.325421 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-srv-cert podName:faa970d9-b5d7-49a1-b162-2bed0f528b71 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:48.825397963 +0000 UTC m=+141.801806456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-srv-cert") pod "catalog-operator-68c6474976-jh8w7" (UID: "faa970d9-b5d7-49a1-b162-2bed0f528b71") : failed to sync secret cache: timed out waiting for the condition
Feb 14 18:44:48 crc kubenswrapper[4897]: E0214 18:44:48.325529 4897 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Feb 14 18:44:48 crc kubenswrapper[4897]: E0214 18:44:48.325706 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-profile-collector-cert podName:faa970d9-b5d7-49a1-b162-2bed0f528b71 nodeName:}" failed. No retries permitted until 2026-02-14 18:44:48.825671652 +0000 UTC m=+141.802080145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-profile-collector-cert") pod "catalog-operator-68c6474976-jh8w7" (UID: "faa970d9-b5d7-49a1-b162-2bed0f528b71") : failed to sync secret cache: timed out waiting for the condition
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.327819 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.339787 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c5452f-efd7-406e-84de-0275882c823e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.346483 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.379315 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.385782 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.406079 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.426727 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.447501 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: W0214 18:44:48.451680 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6a281f2_1a7e_419e_8736_57c1a3bae82e.slice/crio-c41ceb8de696a25b20aceb027422d307b02ab3af22b893878cc905fe273c83bc WatchSource:0}: Error finding container c41ceb8de696a25b20aceb027422d307b02ab3af22b893878cc905fe273c83bc: Status 404 returned error can't find the container with id c41ceb8de696a25b20aceb027422d307b02ab3af22b893878cc905fe273c83bc
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.468455 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.486632 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.506775 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.527501 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.546999 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.567109 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.586262 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.599176 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" event={"ID":"5c5ace00-d072-440a-bc7b-982b96f636e7","Type":"ContainerStarted","Data":"7b0308777165607f1f59f85edd8681e1cfa9ad2e5806c87d23027c99dede7b4e"}
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.601657 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" event={"ID":"0909c109-9799-4bc9-9d4f-1d97a95ec410","Type":"ContainerStarted","Data":"5ef05f006a9467ef75b39bc4340efe4585c0e700d9224135d1defe792f18ed5a"}
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.601711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" event={"ID":"0909c109-9799-4bc9-9d4f-1d97a95ec410","Type":"ContainerStarted","Data":"bd5a64b4fd15cb5f326faa7d455c9214bc7d5bf953d6d6d80cb4d49e2088e576"}
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.603006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" event={"ID":"eca953dd-cbbc-404a-974f-babb9bf2d0e8","Type":"ContainerStarted","Data":"6a066f08e081bc34fd0102b3a657573df6d5bc326f0ba5a812d2f5b204a6ac71"}
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.604668 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" event={"ID":"c6a281f2-1a7e-419e-8736-57c1a3bae82e","Type":"ContainerStarted","Data":"c41ceb8de696a25b20aceb027422d307b02ab3af22b893878cc905fe273c83bc"}
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.606444 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.626352 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.646819 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.667581 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.687466 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.706549 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.726239 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.746069 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.766332 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.785970 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.806430 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.826781 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.846003 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.850187 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-profile-collector-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.850353 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-srv-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.857861 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-srv-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.858594 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/faa970d9-b5d7-49a1-b162-2bed0f528b71-profile-collector-cert\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.866803 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.885884 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.906422 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.927756 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.946978 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 14 18:44:48 crc kubenswrapper[4897]: I0214 18:44:48.979386 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.006648 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.027310 4897 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.046307 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.081672 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlsx8\" (UniqueName: \"kubernetes.io/projected/e05c0cf5-7ca3-47f2-810f-492e73edc19a-kube-api-access-nlsx8\") pod \"etcd-operator-b45778765-msfx9\" (UID: \"e05c0cf5-7ca3-47f2-810f-492e73edc19a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.101925 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xmx4\" (UniqueName: \"kubernetes.io/projected/0cd062a1-246d-4ad6-b81a-a9f103576a32-kube-api-access-4xmx4\") pod \"console-operator-58897d9998-62b7q\" (UID: \"0cd062a1-246d-4ad6-b81a-a9f103576a32\") " pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.123417 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7bw\" (UniqueName: \"kubernetes.io/projected/88a85445-8209-4b30-a0e0-c0f14d790fb5-kube-api-access-6v7bw\") pod \"oauth-openshift-558db77b4-c8v6s\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.142239 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56tqh\" (UniqueName: \"kubernetes.io/projected/65112b94-8028-49f5-91fc-b83b49f30017-kube-api-access-56tqh\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.144985 4897 request.go:700] Waited for 1.927473066s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.160936 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn94b\" (UniqueName: \"kubernetes.io/projected/7ec1f803-3889-4483-87ae-9a38bd020818-kube-api-access-jn94b\") pod \"downloads-7954f5f757-9kvql\" (UID: \"7ec1f803-3889-4483-87ae-9a38bd020818\") " pod="openshift-console/downloads-7954f5f757-9kvql"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.180399 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb57x\" (UniqueName: \"kubernetes.io/projected/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-kube-api-access-sb57x\") pod \"route-controller-manager-6576b87f9c-ws2d2\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.200781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75s5t\" (UniqueName: \"kubernetes.io/projected/e6fba668-d4b4-45fb-89ec-7808a1269d1d-kube-api-access-75s5t\") pod \"dns-operator-744455d44c-7lrwj\" (UID: \"e6fba668-d4b4-45fb-89ec-7808a1269d1d\") " pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.220128 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwgm\" (UniqueName: \"kubernetes.io/projected/67063058-60ca-4efd-a102-cd90d5e43e56-kube-api-access-kzwgm\") pod \"apiserver-7bbb656c7d-f9lc5\" (UID: \"67063058-60ca-4efd-a102-cd90d5e43e56\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.254787 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b15d9f59-a87a-47ef-a61f-4e791186229d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.264137 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.269922 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.274462 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnmvt\" (UniqueName: \"kubernetes.io/projected/a7b90621-706c-47e9-b361-14c9bb002f11-kube-api-access-cnmvt\") pod \"openshift-controller-manager-operator-756b6f6bc6-fzn9r\" (UID: \"a7b90621-706c-47e9-b361-14c9bb002f11\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.291762 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.294600 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf7c6\" (UniqueName: \"kubernetes.io/projected/103a8a7a-d7e9-4d28-b909-cf3468e483e9-kube-api-access-bf7c6\") pod \"cluster-samples-operator-665b6dd947-xcksp\" (UID: \"103a8a7a-d7e9-4d28-b909-cf3468e483e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.301899 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.305859 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77cd9be5-c96a-494c-9d40-1068555dceda-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mgfqz\" (UID: \"77cd9be5-c96a-494c-9d40-1068555dceda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.310612 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.318761 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9kvql"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.334465 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65112b94-8028-49f5-91fc-b83b49f30017-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7xmvn\" (UID: \"65112b94-8028-49f5-91fc-b83b49f30017\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.355113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbp6\" (UniqueName: \"kubernetes.io/projected/3b9a689e-54e3-48df-a102-500878c35aa2-kube-api-access-ktbp6\") pod \"openshift-config-operator-7777fb866f-klcwn\" (UID: \"3b9a689e-54e3-48df-a102-500878c35aa2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.367460 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.373077 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.378716 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjgf9\" (UniqueName: \"kubernetes.io/projected/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-kube-api-access-cjgf9\") pod \"console-f9d7485db-6jjtk\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " pod="openshift-console/console-f9d7485db-6jjtk"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.380928 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.387968 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.390605 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2l7c\" (UniqueName: \"kubernetes.io/projected/b15d9f59-a87a-47ef-a61f-4e791186229d-kube-api-access-v2l7c\") pod \"cluster-image-registry-operator-dc59b4c8b-ppn2g\" (UID: \"b15d9f59-a87a-47ef-a61f-4e791186229d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.394140 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.435734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt5zz\" (UniqueName: \"kubernetes.io/projected/72c5452f-efd7-406e-84de-0275882c823e-kube-api-access-nt5zz\") pod \"kube-storage-version-migrator-operator-b67b599dd-gws9q\" (UID: \"72c5452f-efd7-406e-84de-0275882c823e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.445999 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7z2q\" (UniqueName: \"kubernetes.io/projected/faa970d9-b5d7-49a1-b162-2bed0f528b71-kube-api-access-t7z2q\") pod \"catalog-operator-68c6474976-jh8w7\" (UID: \"faa970d9-b5d7-49a1-b162-2bed0f528b71\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457700 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/57df6e6f-6814-477c-aff1-19d5eb81e4c1-node-bootstrap-token\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457759 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19dafede-65e3-4652-880d-55d3d86dc12b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457785 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457845 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/10c2cb4a-c03b-49ca-a6ca-1b5637923932-kube-api-access-mjgqk\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457900 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457920 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f634b6-64a2-4d22-b194-a9515113a4e7-config\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457939 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwxp\" (UniqueName: \"kubernetes.io/projected/2fd14f21-0836-40b2-b509-ec296556f45c-kube-api-access-hzwxp\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457957 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df96584e-d06a-4906-9c95-3e94936695ef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/57df6e6f-6814-477c-aff1-19d5eb81e4c1-certs\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.457994 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8aadef2-477c-4699-9a1b-dd557ad9e273-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458050 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-service-ca-bundle\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458087 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-default-certificate\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458153 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0f237a59-0e7e-4ae0-94c9-c6d451224a27-tmpfs\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458181 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d70bf6df-ebee-4193-982f-e9d86147ea35-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wg7rv\" (UID: \"d70bf6df-ebee-4193-982f-e9d86147ea35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458210 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-certificates\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458233 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df96584e-d06a-4906-9c95-3e94936695ef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458255 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9baea172-0e9d-4866-917e-c5e0a57e1413-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hqm2\" (UID: \"9baea172-0e9d-4866-917e-c5e0a57e1413\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2"
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458276 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lwt\" (UniqueName: \"kubernetes.io/projected/85830a53-70c2-433d-a359-025fababa083-kube-api-access-n7lwt\") pod \"collect-profiles-29518230-qxs85\" (UID:
\"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458298 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/758d396c-63a7-4f41-a396-713cb90db5af-proxy-tls\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458348 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-service-ca-bundle\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458365 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85830a53-70c2-433d-a359-025fababa083-config-volume\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458381 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f237a59-0e7e-4ae0-94c9-c6d451224a27-webhook-cert\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458421 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-signing-key\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458466 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4js\" (UniqueName: \"kubernetes.io/projected/7a4a964d-7591-4a23-bc83-2fda90a1b3da-kube-api-access-fd4js\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458960 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19dafede-65e3-4652-880d-55d3d86dc12b-config\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.458988 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcf7s\" (UniqueName: \"kubernetes.io/projected/972de147-8a61-4e52-b8a3-2cedb4f22f11-kube-api-access-hcf7s\") pod \"migrator-59844c95c7-l5nd2\" (UID: \"972de147-8a61-4e52-b8a3-2cedb4f22f11\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459015 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a4a964d-7591-4a23-bc83-2fda90a1b3da-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459056 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10c2cb4a-c03b-49ca-a6ca-1b5637923932-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459076 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrpm2\" (UniqueName: \"kubernetes.io/projected/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-kube-api-access-xrpm2\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88eb139d-9259-4c72-b9db-0f0cd154fda9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459137 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjnh\" (UniqueName: \"kubernetes.io/projected/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-kube-api-access-ttjnh\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 
18:44:49.459156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbb71e3-f936-4fb4-b5ba-772aa900f80d-cert\") pod \"ingress-canary-rdcrz\" (UID: \"7dbb71e3-f936-4fb4-b5ba-772aa900f80d\") " pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459175 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/758d396c-63a7-4f41-a396-713cb90db5af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459206 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10c2cb4a-c03b-49ca-a6ca-1b5637923932-srv-cert\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459239 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7wb\" (UniqueName: \"kubernetes.io/projected/15fa65ae-a663-434d-9d2d-2a69a3f7d81c-kube-api-access-7s7wb\") pod \"package-server-manager-789f6589d5-gvc49\" (UID: \"15fa65ae-a663-434d-9d2d-2a69a3f7d81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459255 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-config-volume\") pod \"dns-default-bzvvc\" (UID: 
\"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459269 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-signing-cabundle\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459297 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-metrics-tls\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459315 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9wkv\" (UniqueName: \"kubernetes.io/projected/57df6e6f-6814-477c-aff1-19d5eb81e4c1-kube-api-access-v9wkv\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459342 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94pgf\" (UniqueName: \"kubernetes.io/projected/11f634b6-64a2-4d22-b194-a9515113a4e7-kube-api-access-94pgf\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459362 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/df96584e-d06a-4906-9c95-3e94936695ef-config\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459387 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-bound-sa-token\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459405 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f634b6-64a2-4d22-b194-a9515113a4e7-serving-cert\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459498 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8aadef2-477c-4699-9a1b-dd557ad9e273-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459517 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f237a59-0e7e-4ae0-94c9-c6d451224a27-apiservice-cert\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 
18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459555 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdzjc\" (UniqueName: \"kubernetes.io/projected/d70bf6df-ebee-4193-982f-e9d86147ea35-kube-api-access-gdzjc\") pod \"multus-admission-controller-857f4d67dd-wg7rv\" (UID: \"d70bf6df-ebee-4193-982f-e9d86147ea35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-trusted-ca\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459591 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8s6n\" (UniqueName: \"kubernetes.io/projected/d62c28f1-696b-4b88-8f46-67abf833ee4c-kube-api-access-c8s6n\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459625 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85830a53-70c2-433d-a359-025fababa083-secret-volume\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459672 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-stats-auth\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459691 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8lq4\" (UniqueName: \"kubernetes.io/projected/0f237a59-0e7e-4ae0-94c9-c6d451224a27-kube-api-access-g8lq4\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459745 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459765 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-metrics-certs\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " 
pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459799 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9baea172-0e9d-4866-917e-c5e0a57e1413-kube-api-access-22kbr\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hqm2\" (UID: \"9baea172-0e9d-4866-917e-c5e0a57e1413\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459842 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58zds\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-kube-api-access-58zds\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459859 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd14f21-0836-40b2-b509-ec296556f45c-serving-cert\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459888 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xr9\" (UniqueName: \"kubernetes.io/projected/88eb139d-9259-4c72-b9db-0f0cd154fda9-kube-api-access-f5xr9\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459913 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnp6\" (UniqueName: \"kubernetes.io/projected/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-kube-api-access-9xnp6\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4a964d-7591-4a23-bc83-2fda90a1b3da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459946 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/758d396c-63a7-4f41-a396-713cb90db5af-images\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459980 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-tls\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.459995 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88eb139d-9259-4c72-b9db-0f0cd154fda9-proxy-tls\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: 
\"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.460044 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19dafede-65e3-4652-880d-55d3d86dc12b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.460067 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf24f\" (UniqueName: \"kubernetes.io/projected/758d396c-63a7-4f41-a396-713cb90db5af-kube-api-access-qf24f\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.460109 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-config\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.460147 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/15fa65ae-a663-434d-9d2d-2a69a3f7d81c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gvc49\" (UID: \"15fa65ae-a663-434d-9d2d-2a69a3f7d81c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.461371 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl98b\" (UniqueName: \"kubernetes.io/projected/7dbb71e3-f936-4fb4-b5ba-772aa900f80d-kube-api-access-jl98b\") pod \"ingress-canary-rdcrz\" (UID: \"7dbb71e3-f936-4fb4-b5ba-772aa900f80d\") " pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.469376 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:49.969358556 +0000 UTC m=+142.945767039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.476383 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.509129 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.519185 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.554560 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-62b7q"] Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.555727 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5"] Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564231 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564373 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-trusted-ca\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564405 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8s6n\" (UniqueName: \"kubernetes.io/projected/d62c28f1-696b-4b88-8f46-67abf833ee4c-kube-api-access-c8s6n\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564439 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85830a53-70c2-433d-a359-025fababa083-secret-volume\") pod \"collect-profiles-29518230-qxs85\" (UID: 
\"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564458 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-stats-auth\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564481 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8lq4\" (UniqueName: \"kubernetes.io/projected/0f237a59-0e7e-4ae0-94c9-c6d451224a27-kube-api-access-g8lq4\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564506 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564521 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-metrics-certs\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564538 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22kbr\" (UniqueName: 
\"kubernetes.io/projected/9baea172-0e9d-4866-917e-c5e0a57e1413-kube-api-access-22kbr\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hqm2\" (UID: \"9baea172-0e9d-4866-917e-c5e0a57e1413\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564563 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58zds\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-kube-api-access-58zds\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd14f21-0836-40b2-b509-ec296556f45c-serving-cert\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564602 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-registration-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564618 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5xr9\" (UniqueName: \"kubernetes.io/projected/88eb139d-9259-4c72-b9db-0f0cd154fda9-kube-api-access-f5xr9\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564651 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnp6\" (UniqueName: \"kubernetes.io/projected/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-kube-api-access-9xnp6\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564667 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4a964d-7591-4a23-bc83-2fda90a1b3da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564682 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/758d396c-63a7-4f41-a396-713cb90db5af-images\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564715 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-tls\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564728 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88eb139d-9259-4c72-b9db-0f0cd154fda9-proxy-tls\") 
pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564763 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19dafede-65e3-4652-880d-55d3d86dc12b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564780 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf24f\" (UniqueName: \"kubernetes.io/projected/758d396c-63a7-4f41-a396-713cb90db5af-kube-api-access-qf24f\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564808 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-config\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564827 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/15fa65ae-a663-434d-9d2d-2a69a3f7d81c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gvc49\" (UID: \"15fa65ae-a663-434d-9d2d-2a69a3f7d81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" 
Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564861 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgvwl\" (UniqueName: \"kubernetes.io/projected/68eb569a-ca5d-4eef-a936-fd697b26d0be-kube-api-access-sgvwl\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564889 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl98b\" (UniqueName: \"kubernetes.io/projected/7dbb71e3-f936-4fb4-b5ba-772aa900f80d-kube-api-access-jl98b\") pod \"ingress-canary-rdcrz\" (UID: \"7dbb71e3-f936-4fb4-b5ba-772aa900f80d\") " pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564907 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19dafede-65e3-4652-880d-55d3d86dc12b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564925 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564941 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/57df6e6f-6814-477c-aff1-19d5eb81e4c1-node-bootstrap-token\") pod 
\"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/10c2cb4a-c03b-49ca-a6ca-1b5637923932-kube-api-access-mjgqk\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.564983 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565001 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8aadef2-477c-4699-9a1b-dd557ad9e273-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565019 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f634b6-64a2-4d22-b194-a9515113a4e7-config\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565051 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hzwxp\" (UniqueName: \"kubernetes.io/projected/2fd14f21-0836-40b2-b509-ec296556f45c-kube-api-access-hzwxp\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565067 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df96584e-d06a-4906-9c95-3e94936695ef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565083 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/57df6e6f-6814-477c-aff1-19d5eb81e4c1-certs\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565099 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-mountpoint-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565126 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-service-ca-bundle\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: 
I0214 18:44:49.565144 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-default-certificate\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565160 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0f237a59-0e7e-4ae0-94c9-c6d451224a27-tmpfs\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565190 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d70bf6df-ebee-4193-982f-e9d86147ea35-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wg7rv\" (UID: \"d70bf6df-ebee-4193-982f-e9d86147ea35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565208 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-csi-data-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565232 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-certificates\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565257 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df96584e-d06a-4906-9c95-3e94936695ef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565276 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9baea172-0e9d-4866-917e-c5e0a57e1413-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hqm2\" (UID: \"9baea172-0e9d-4866-917e-c5e0a57e1413\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565293 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7lwt\" (UniqueName: \"kubernetes.io/projected/85830a53-70c2-433d-a359-025fababa083-kube-api-access-n7lwt\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565306 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/758d396c-63a7-4f41-a396-713cb90db5af-proxy-tls\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565337 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-service-ca-bundle\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85830a53-70c2-433d-a359-025fababa083-config-volume\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565368 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f237a59-0e7e-4ae0-94c9-c6d451224a27-webhook-cert\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565411 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-signing-key\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565428 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-socket-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565445 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4js\" (UniqueName: \"kubernetes.io/projected/7a4a964d-7591-4a23-bc83-2fda90a1b3da-kube-api-access-fd4js\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565463 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19dafede-65e3-4652-880d-55d3d86dc12b-config\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565480 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcf7s\" (UniqueName: \"kubernetes.io/projected/972de147-8a61-4e52-b8a3-2cedb4f22f11-kube-api-access-hcf7s\") pod \"migrator-59844c95c7-l5nd2\" (UID: \"972de147-8a61-4e52-b8a3-2cedb4f22f11\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565497 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a4a964d-7591-4a23-bc83-2fda90a1b3da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565512 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10c2cb4a-c03b-49ca-a6ca-1b5637923932-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5wxpc\" 
(UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrpm2\" (UniqueName: \"kubernetes.io/projected/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-kube-api-access-xrpm2\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565543 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88eb139d-9259-4c72-b9db-0f0cd154fda9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565575 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjnh\" (UniqueName: \"kubernetes.io/projected/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-kube-api-access-ttjnh\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565599 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbb71e3-f936-4fb4-b5ba-772aa900f80d-cert\") pod \"ingress-canary-rdcrz\" (UID: \"7dbb71e3-f936-4fb4-b5ba-772aa900f80d\") " pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565616 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/758d396c-63a7-4f41-a396-713cb90db5af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565631 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10c2cb4a-c03b-49ca-a6ca-1b5637923932-srv-cert\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565647 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7wb\" (UniqueName: \"kubernetes.io/projected/15fa65ae-a663-434d-9d2d-2a69a3f7d81c-kube-api-access-7s7wb\") pod \"package-server-manager-789f6589d5-gvc49\" (UID: \"15fa65ae-a663-434d-9d2d-2a69a3f7d81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565663 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-config-volume\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565677 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-signing-cabundle\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565692 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-94pgf\" (UniqueName: \"kubernetes.io/projected/11f634b6-64a2-4d22-b194-a9515113a4e7-kube-api-access-94pgf\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565707 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-metrics-tls\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565722 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9wkv\" (UniqueName: \"kubernetes.io/projected/57df6e6f-6814-477c-aff1-19d5eb81e4c1-kube-api-access-v9wkv\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565738 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df96584e-d06a-4906-9c95-3e94936695ef-config\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565755 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-bound-sa-token\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc 
kubenswrapper[4897]: I0214 18:44:49.565778 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f634b6-64a2-4d22-b194-a9515113a4e7-serving-cert\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565794 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-plugins-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565830 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8aadef2-477c-4699-9a1b-dd557ad9e273-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565848 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdzjc\" (UniqueName: \"kubernetes.io/projected/d70bf6df-ebee-4193-982f-e9d86147ea35-kube-api-access-gdzjc\") pod \"multus-admission-controller-857f4d67dd-wg7rv\" (UID: \"d70bf6df-ebee-4193-982f-e9d86147ea35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.565865 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f237a59-0e7e-4ae0-94c9-c6d451224a27-apiservice-cert\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: 
\"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.567084 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-service-ca-bundle\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.567253 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.06722671 +0000 UTC m=+143.043635193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.568820 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-certificates\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.568924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-trusted-ca\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.572464 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-service-ca-bundle\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.575373 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19dafede-65e3-4652-880d-55d3d86dc12b-config\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.576001 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85830a53-70c2-433d-a359-025fababa083-config-volume\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.576144 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a4a964d-7591-4a23-bc83-2fda90a1b3da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.576866 
4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88eb139d-9259-4c72-b9db-0f0cd154fda9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.579721 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f634b6-64a2-4d22-b194-a9515113a4e7-config\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.580227 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbb71e3-f936-4fb4-b5ba-772aa900f80d-cert\") pod \"ingress-canary-rdcrz\" (UID: \"7dbb71e3-f936-4fb4-b5ba-772aa900f80d\") " pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.582173 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.582690 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/758d396c-63a7-4f41-a396-713cb90db5af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.583726 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df96584e-d06a-4906-9c95-3e94936695ef-config\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.587139 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-signing-key\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.587884 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/57df6e6f-6814-477c-aff1-19d5eb81e4c1-node-bootstrap-token\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.588424 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9baea172-0e9d-4866-917e-c5e0a57e1413-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hqm2\" (UID: \"9baea172-0e9d-4866-917e-c5e0a57e1413\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.588740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/758d396c-63a7-4f41-a396-713cb90db5af-images\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.588937 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-config\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.589386 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8aadef2-477c-4699-9a1b-dd557ad9e273-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.590931 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-config-volume\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.591887 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.592325 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd14f21-0836-40b2-b509-ec296556f45c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.592768 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0f237a59-0e7e-4ae0-94c9-c6d451224a27-tmpfs\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.593868 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-signing-cabundle\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.596530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df96584e-d06a-4906-9c95-3e94936695ef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc 
kubenswrapper[4897]: I0214 18:44:49.596637 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-default-certificate\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.596773 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.596850 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d70bf6df-ebee-4193-982f-e9d86147ea35-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wg7rv\" (UID: \"d70bf6df-ebee-4193-982f-e9d86147ea35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.603391 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd14f21-0836-40b2-b509-ec296556f45c-serving-cert\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.609602 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f237a59-0e7e-4ae0-94c9-c6d451224a27-apiservice-cert\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.612206 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7a4a964d-7591-4a23-bc83-2fda90a1b3da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.612466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85830a53-70c2-433d-a359-025fababa083-secret-volume\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.612769 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10c2cb4a-c03b-49ca-a6ca-1b5637923932-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.613460 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.613519 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-stats-auth\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.613631 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f237a59-0e7e-4ae0-94c9-c6d451224a27-webhook-cert\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.613831 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/15fa65ae-a663-434d-9d2d-2a69a3f7d81c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gvc49\" (UID: \"15fa65ae-a663-434d-9d2d-2a69a3f7d81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.614183 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8s6n\" (UniqueName: \"kubernetes.io/projected/d62c28f1-696b-4b88-8f46-67abf833ee4c-kube-api-access-c8s6n\") pod \"marketplace-operator-79b997595-9n8vm\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: W0214 18:44:49.614406 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cd062a1_246d_4ad6_b81a_a9f103576a32.slice/crio-d74c2b59dd0e9fe102150ac5b8e18347d7c2d381ade0cea1aeab28561f51fad8 WatchSource:0}: Error finding container d74c2b59dd0e9fe102150ac5b8e18347d7c2d381ade0cea1aeab28561f51fad8: Status 404 returned error can't find the container with id d74c2b59dd0e9fe102150ac5b8e18347d7c2d381ade0cea1aeab28561f51fad8 Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.615163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/758d396c-63a7-4f41-a396-713cb90db5af-proxy-tls\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.616222 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10c2cb4a-c03b-49ca-a6ca-1b5637923932-srv-cert\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.618729 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88eb139d-9259-4c72-b9db-0f0cd154fda9-proxy-tls\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.620589 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-metrics-certs\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.622298 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/57df6e6f-6814-477c-aff1-19d5eb81e4c1-certs\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.623018 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-metrics-tls\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.623653 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-tls\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.624411 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f634b6-64a2-4d22-b194-a9515113a4e7-serving-cert\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.631271 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4js\" (UniqueName: \"kubernetes.io/projected/7a4a964d-7591-4a23-bc83-2fda90a1b3da-kube-api-access-fd4js\") pod \"openshift-apiserver-operator-796bbdcf4f-t5h9t\" (UID: \"7a4a964d-7591-4a23-bc83-2fda90a1b3da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.631282 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19dafede-65e3-4652-880d-55d3d86dc12b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.633251 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" event={"ID":"c6a281f2-1a7e-419e-8736-57c1a3bae82e","Type":"ContainerStarted","Data":"6fb8b48e765b7ea4df904e163f32f092b0464180d2a841c2d2477b7561df4293"} Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.634426 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" event={"ID":"c6a281f2-1a7e-419e-8736-57c1a3bae82e","Type":"ContainerStarted","Data":"9c337d8fb32a44b510434b5b3f8f6ce6e09e0ce9d69681d5f8b2e4d8599307a7"} Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.633934 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8aadef2-477c-4699-9a1b-dd557ad9e273-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.635938 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7lwt\" (UniqueName: \"kubernetes.io/projected/85830a53-70c2-433d-a359-025fababa083-kube-api-access-n7lwt\") pod \"collect-profiles-29518230-qxs85\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.651356 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcf7s\" (UniqueName: \"kubernetes.io/projected/972de147-8a61-4e52-b8a3-2cedb4f22f11-kube-api-access-hcf7s\") pod \"migrator-59844c95c7-l5nd2\" (UID: \"972de147-8a61-4e52-b8a3-2cedb4f22f11\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.661828 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="5c5ace00-d072-440a-bc7b-982b96f636e7" containerID="ebfc6cedd3158359b773f4b50d8b3130cf9ddaff6cbed2ed9c276a3aca248ff6" exitCode=0 Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.662363 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" event={"ID":"5c5ace00-d072-440a-bc7b-982b96f636e7","Type":"ContainerDied","Data":"ebfc6cedd3158359b773f4b50d8b3130cf9ddaff6cbed2ed9c276a3aca248ff6"} Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.665384 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94pgf\" (UniqueName: \"kubernetes.io/projected/11f634b6-64a2-4d22-b194-a9515113a4e7-kube-api-access-94pgf\") pod \"service-ca-operator-777779d784-wn9fs\" (UID: \"11f634b6-64a2-4d22-b194-a9515113a4e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.666873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.666918 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-registration-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.666974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgvwl\" (UniqueName: 
\"kubernetes.io/projected/68eb569a-ca5d-4eef-a936-fd697b26d0be-kube-api-access-sgvwl\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.667023 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-mountpoint-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.667093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-csi-data-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.667133 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-socket-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.667205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-plugins-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.667397 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-plugins-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.667687 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.167674589 +0000 UTC m=+143.144083072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.668043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-registration-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.669298 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-socket-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.669574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-mountpoint-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.669686 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68eb569a-ca5d-4eef-a936-fd697b26d0be-csi-data-dir\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.673563 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" event={"ID":"eca953dd-cbbc-404a-974f-babb9bf2d0e8","Type":"ContainerStarted","Data":"f588e1e1c8043949c4ea0ca1d83d86c01fd9f314c3f5609dd1b29643e9e07100"} Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.674659 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.677702 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" event={"ID":"0909c109-9799-4bc9-9d4f-1d97a95ec410","Type":"ContainerStarted","Data":"fb481675083ac805d67e52b6cbc9209d9dd8342f12a42aac0291543b6f8a5e92"} Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.679602 4897 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g8d99 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.679658 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.700767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrpm2\" (UniqueName: \"kubernetes.io/projected/9c34acbe-6a2d-446a-b2e2-5fc5a4130deb-kube-api-access-xrpm2\") pod \"router-default-5444994796-c5z8g\" (UID: \"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb\") " pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.710521 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjnh\" (UniqueName: \"kubernetes.io/projected/87f809c6-5e7e-47ec-8fd2-3eca0bd6b045-kube-api-access-ttjnh\") pod \"dns-default-bzvvc\" (UID: \"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045\") " pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.714829 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.729120 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.730290 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwxp\" (UniqueName: \"kubernetes.io/projected/2fd14f21-0836-40b2-b509-ec296556f45c-kube-api-access-hzwxp\") pod \"authentication-operator-69f744f599-rx2r9\" (UID: \"2fd14f21-0836-40b2-b509-ec296556f45c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.746724 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.750299 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df96584e-d06a-4906-9c95-3e94936695ef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xlrkp\" (UID: \"df96584e-d06a-4906-9c95-3e94936695ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.764991 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8lq4\" (UniqueName: \"kubernetes.io/projected/0f237a59-0e7e-4ae0-94c9-c6d451224a27-kube-api-access-g8lq4\") pod \"packageserver-d55dfcdfc-9pw99\" (UID: \"0f237a59-0e7e-4ae0-94c9-c6d451224a27\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.768569 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.768982 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.769109 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.26908832 +0000 UTC m=+143.245496803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.769958 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.771323 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.271305113 +0000 UTC m=+143.247713776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.782121 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9wkv\" (UniqueName: \"kubernetes.io/projected/57df6e6f-6814-477c-aff1-19d5eb81e4c1-kube-api-access-v9wkv\") pod \"machine-config-server-fh6qr\" (UID: \"57df6e6f-6814-477c-aff1-19d5eb81e4c1\") " pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.792219 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.799790 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.813697 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19dafede-65e3-4652-880d-55d3d86dc12b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kknqp\" (UID: \"19dafede-65e3-4652-880d-55d3d86dc12b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.822370 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf24f\" (UniqueName: \"kubernetes.io/projected/758d396c-63a7-4f41-a396-713cb90db5af-kube-api-access-qf24f\") pod \"machine-config-operator-74547568cd-k5jxh\" (UID: \"758d396c-63a7-4f41-a396-713cb90db5af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.828355 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fh6qr" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.837507 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.847577 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.855240 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-bound-sa-token\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.859760 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.872347 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.874515 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.374465351 +0000 UTC m=+143.350873834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.880643 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r"] Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.887830 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7lrwj"] Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.889685 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/10c2cb4a-c03b-49ca-a6ca-1b5637923932-kube-api-access-mjgqk\") pod \"olm-operator-6b444d44fb-5wxpc\" (UID: \"10c2cb4a-c03b-49ca-a6ca-1b5637923932\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.892903 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnp6\" (UniqueName: \"kubernetes.io/projected/e45f17b5-6656-4aef-95c8-b1856ae4f1c4-kube-api-access-9xnp6\") pod \"service-ca-9c57cc56f-d8kqp\" (UID: \"e45f17b5-6656-4aef-95c8-b1856ae4f1c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.897903 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.935595 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdzjc\" (UniqueName: \"kubernetes.io/projected/d70bf6df-ebee-4193-982f-e9d86147ea35-kube-api-access-gdzjc\") pod \"multus-admission-controller-857f4d67dd-wg7rv\" (UID: \"d70bf6df-ebee-4193-982f-e9d86147ea35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.935788 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.950441 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8v6s"] Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.955295 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9kvql"] Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.962015 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7wb\" (UniqueName: \"kubernetes.io/projected/15fa65ae-a663-434d-9d2d-2a69a3f7d81c-kube-api-access-7s7wb\") pod \"package-server-manager-789f6589d5-gvc49\" (UID: \"15fa65ae-a663-434d-9d2d-2a69a3f7d81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.971981 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl98b\" (UniqueName: \"kubernetes.io/projected/7dbb71e3-f936-4fb4-b5ba-772aa900f80d-kube-api-access-jl98b\") pod \"ingress-canary-rdcrz\" (UID: \"7dbb71e3-f936-4fb4-b5ba-772aa900f80d\") " pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.975515 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:49 crc kubenswrapper[4897]: E0214 18:44:49.975924 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.475911403 +0000 UTC m=+143.452319886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.982872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22kbr\" (UniqueName: \"kubernetes.io/projected/9baea172-0e9d-4866-917e-c5e0a57e1413-kube-api-access-22kbr\") pod \"control-plane-machine-set-operator-78cbb6b69f-2hqm2\" (UID: \"9baea172-0e9d-4866-917e-c5e0a57e1413\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:49 crc kubenswrapper[4897]: I0214 18:44:49.995488 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.001805 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-msfx9"] Feb 14 
18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.003614 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.003620 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5xr9\" (UniqueName: \"kubernetes.io/projected/88eb139d-9259-4c72-b9db-0f0cd154fda9-kube-api-access-f5xr9\") pod \"machine-config-controller-84d6567774-g7lq6\" (UID: \"88eb139d-9259-4c72-b9db-0f0cd154fda9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.014838 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88a85445_8209_4b30_a0e0_c0f14d790fb5.slice/crio-4c498ae963b7f2ee5451cb19e9552698d3f2efb61c474f5c3c7c0741b18a696d WatchSource:0}: Error finding container 4c498ae963b7f2ee5451cb19e9552698d3f2efb61c474f5c3c7c0741b18a696d: Status 404 returned error can't find the container with id 4c498ae963b7f2ee5451cb19e9552698d3f2efb61c474f5c3c7c0741b18a696d Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.017277 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ec1f803_3889_4483_87ae_9a38bd020818.slice/crio-ba11b363a03b52a0f60edf2e15f1698dc210a79bf46953f01c1b6dc4ca4e119a WatchSource:0}: Error finding container ba11b363a03b52a0f60edf2e15f1698dc210a79bf46953f01c1b6dc4ca4e119a: Status 404 returned error can't find the container with id ba11b363a03b52a0f60edf2e15f1698dc210a79bf46953f01c1b6dc4ca4e119a Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.021439 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.030411 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58zds\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-kube-api-access-58zds\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.031306 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57df6e6f_6814_477c_aff1_19d5eb81e4c1.slice/crio-dce586a82dcfa4d67412c4acb12c4a7973740045945896ecf04322bfd8691e82 WatchSource:0}: Error finding container dce586a82dcfa4d67412c4acb12c4a7973740045945896ecf04322bfd8691e82: Status 404 returned error can't find the container with id dce586a82dcfa4d67412c4acb12c4a7973740045945896ecf04322bfd8691e82 Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.038275 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65112b94_8028_49f5_91fc_b83b49f30017.slice/crio-cfd56766e90d33e4993505c9f64278631b31037b4875fce2b192197354eeda7d WatchSource:0}: Error finding container cfd56766e90d33e4993505c9f64278631b31037b4875fce2b192197354eeda7d: Status 404 returned error can't find the container with id cfd56766e90d33e4993505c9f64278631b31037b4875fce2b192197354eeda7d Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.052963 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgvwl\" (UniqueName: \"kubernetes.io/projected/68eb569a-ca5d-4eef-a936-fd697b26d0be-kube-api-access-sgvwl\") pod \"csi-hostpathplugin-jmbj5\" (UID: \"68eb569a-ca5d-4eef-a936-fd697b26d0be\") " pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" 
Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.053589 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.059744 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.063126 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.071794 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode05c0cf5_7ca3_47f2_810f_492e73edc19a.slice/crio-d6b40d78a3ffcf1ad92ebe6038c966652234e7b343d7f2f46c478439380e3b9a WatchSource:0}: Error finding container d6b40d78a3ffcf1ad92ebe6038c966652234e7b343d7f2f46c478439380e3b9a: Status 404 returned error can't find the container with id d6b40d78a3ffcf1ad92ebe6038c966652234e7b343d7f2f46c478439380e3b9a Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.077507 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.078475 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.578457671 +0000 UTC m=+143.554866164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.087852 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.109223 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.123999 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.136335 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6jjtk"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.158427 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.171518 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.174111 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.179649 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.180647 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.680626227 +0000 UTC m=+143.657034730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.195423 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs"] Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.207224 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod044e39f7_5f0c_4bd9_ad2b_6bab235abf9a.slice/crio-58c9ea43b70550e808154cbbe88bce0cb96d7581c712e635752e72c5f313ec06 WatchSource:0}: Error finding container 58c9ea43b70550e808154cbbe88bce0cb96d7581c712e635752e72c5f313ec06: Status 404 returned error can't find the container with id 58c9ea43b70550e808154cbbe88bce0cb96d7581c712e635752e72c5f313ec06 Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.224303 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rdcrz" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.237272 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.254127 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.255257 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.265003 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.281205 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.281452 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.781414057 +0000 UTC m=+143.757822540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.281699 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.282014 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.782006807 +0000 UTC m=+143.758415290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.325280 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9n8vm"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.336304 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.383797 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.384423 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.884384269 +0000 UTC m=+143.860792762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.389208 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.389669 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.889652032 +0000 UTC m=+143.866060515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.406349 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod972de147_8a61_4e52_b8a3_2cedb4f22f11.slice/crio-4e1d03d7496b051a2ce96e5045052aabaffe39ed47d1f463921a0f8f8d96ddfb WatchSource:0}: Error finding container 4e1d03d7496b051a2ce96e5045052aabaffe39ed47d1f463921a0f8f8d96ddfb: Status 404 returned error can't find the container with id 4e1d03d7496b051a2ce96e5045052aabaffe39ed47d1f463921a0f8f8d96ddfb Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.411984 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62c28f1_696b_4b88_8f46_67abf833ee4c.slice/crio-34cee1d3547e92af91fa3962b5a8ab70eb58890ced4e523fb874277431ea7665 WatchSource:0}: Error finding container 34cee1d3547e92af91fa3962b5a8ab70eb58890ced4e523fb874277431ea7665: Status 404 returned error can't find the container with id 34cee1d3547e92af91fa3962b5a8ab70eb58890ced4e523fb874277431ea7665 Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.492862 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.992838621 +0000 UTC m=+143.969247104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.494155 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.494446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.494882 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:50.994873898 +0000 UTC m=+143.971282381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.534305 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.596044 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.596387 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.096373531 +0000 UTC m=+144.072782014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.621850 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" podStartSLOduration=123.621833408 podStartE2EDuration="2m3.621833408s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:50.586078854 +0000 UTC m=+143.562487367" watchObservedRunningTime="2026-02-14 18:44:50.621833408 +0000 UTC m=+143.598241891" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.640848 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.696985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.697739 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:51.197726531 +0000 UTC m=+144.174135014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.713846 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.721489 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" event={"ID":"72c5452f-efd7-406e-84de-0275882c823e","Type":"ContainerStarted","Data":"8d9af09e7da06923270aea6e65fd7681185433029fce862501c7bb81a3465609"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.724238 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" event={"ID":"5c5ace00-d072-440a-bc7b-982b96f636e7","Type":"ContainerStarted","Data":"621bb53b80d45c84245f6e1ace44079b9c0c5d018b53403bcf23f90366b31580"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.726651 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" event={"ID":"b15d9f59-a87a-47ef-a61f-4e791186229d","Type":"ContainerStarted","Data":"99413ccb8c224521472f287d5a8b97c6a083a15f06a66c5e644c9b5ad5780a55"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.732729 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" event={"ID":"faa970d9-b5d7-49a1-b162-2bed0f528b71","Type":"ContainerStarted","Data":"c8655e9aa8948b370fb5b3cc7c0a3f1666fbe72fde269942b415977de4d4db7e"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.733558 4897 csr.go:261] certificate signing request csr-w2x92 is approved, waiting to be issued Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.738249 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod758d396c_63a7_4f41_a396_713cb90db5af.slice/crio-e8e4e3673da6e81d104d8d83a6c11cb185b41eb3714d7f5a445c479d8aa58b25 WatchSource:0}: Error finding container e8e4e3673da6e81d104d8d83a6c11cb185b41eb3714d7f5a445c479d8aa58b25: Status 404 returned error can't find the container with id e8e4e3673da6e81d104d8d83a6c11cb185b41eb3714d7f5a445c479d8aa58b25 Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.740523 4897 csr.go:257] certificate signing request csr-w2x92 is issued Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.741632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" event={"ID":"7a4a964d-7591-4a23-bc83-2fda90a1b3da","Type":"ContainerStarted","Data":"e445a769e9a3bdae40fe26970b2cbc40a586b6649347e4dc1a55d0676374c746"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.755272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6jjtk" event={"ID":"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a","Type":"ContainerStarted","Data":"58c9ea43b70550e808154cbbe88bce0cb96d7581c712e635752e72c5f313ec06"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.772096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" 
event={"ID":"a7b90621-706c-47e9-b361-14c9bb002f11","Type":"ContainerStarted","Data":"bd93301d4c67c835fd96f21e45d1ea4baa4e118c183cfd359b90a2340bfc6bca"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.779937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9kvql" event={"ID":"7ec1f803-3889-4483-87ae-9a38bd020818","Type":"ContainerStarted","Data":"ba11b363a03b52a0f60edf2e15f1698dc210a79bf46953f01c1b6dc4ca4e119a"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.791866 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-62b7q" event={"ID":"0cd062a1-246d-4ad6-b81a-a9f103576a32","Type":"ContainerStarted","Data":"7b3a5727ae9bc5b3d107f5c86405f5bcda06d06037b3d03a97b080d98c8fa2ce"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.792418 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-62b7q" event={"ID":"0cd062a1-246d-4ad6-b81a-a9f103576a32","Type":"ContainerStarted","Data":"d74c2b59dd0e9fe102150ac5b8e18347d7c2d381ade0cea1aeab28561f51fad8"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.794011 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.798123 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" event={"ID":"88a85445-8209-4b30-a0e0-c0f14d790fb5","Type":"ContainerStarted","Data":"4c498ae963b7f2ee5451cb19e9552698d3f2efb61c474f5c3c7c0741b18a696d"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.798610 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.798811 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.29878562 +0000 UTC m=+144.275194113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.798958 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.799315 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.299301747 +0000 UTC m=+144.275710240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.799548 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c5z8g" event={"ID":"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb","Type":"ContainerStarted","Data":"d933226f4d63a07e451f8a378c978db1eca0e13a3e5220d9f4b91a1a76177239"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.799577 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c5z8g" event={"ID":"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb","Type":"ContainerStarted","Data":"427e71afd19cda6a51d1eb4336a720f414cd17cecb901d2402b475234e16faec"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.801512 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" event={"ID":"b63cb010-df8f-4e29-a7f3-6b68cb03e63a","Type":"ContainerStarted","Data":"17557ee6fab1f9dab1b078daf1fe67862c442b04be2bb62cdaf0f396cff542e3"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.802685 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" event={"ID":"972de147-8a61-4e52-b8a3-2cedb4f22f11","Type":"ContainerStarted","Data":"4e1d03d7496b051a2ce96e5045052aabaffe39ed47d1f463921a0f8f8d96ddfb"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.807000 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" event={"ID":"103a8a7a-d7e9-4d28-b909-cf3468e483e9","Type":"ContainerStarted","Data":"bc4a21a8b6466fce2f959c86ab1a0991db5ba3beecc630ae1577f67e10d54d86"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.808912 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" event={"ID":"d62c28f1-696b-4b88-8f46-67abf833ee4c","Type":"ContainerStarted","Data":"34cee1d3547e92af91fa3962b5a8ab70eb58890ced4e523fb874277431ea7665"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.818050 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rx2r9"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.818192 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.818474 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.818515 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.833164 4897 generic.go:334] "Generic (PLEG): container finished" podID="67063058-60ca-4efd-a102-cd90d5e43e56" containerID="45b8418d72c8f1b6acb0d6bc5bbb277ad35a73f8afe104455b34771a2151e1c8" exitCode=0 Feb 14 
18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.833260 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" event={"ID":"67063058-60ca-4efd-a102-cd90d5e43e56","Type":"ContainerDied","Data":"45b8418d72c8f1b6acb0d6bc5bbb277ad35a73f8afe104455b34771a2151e1c8"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.833292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" event={"ID":"67063058-60ca-4efd-a102-cd90d5e43e56","Type":"ContainerStarted","Data":"c86eb58a4aa668d4b03d1365973d2af401ee24ea2f188984d93e97cc97c908f5"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.839954 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" event={"ID":"11f634b6-64a2-4d22-b194-a9515113a4e7","Type":"ContainerStarted","Data":"72ae0236e3d9e2555c1270287bd732f63bddeef82cd075190e40f818b7531e09"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.857369 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.883383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" event={"ID":"77cd9be5-c96a-494c-9d40-1068555dceda","Type":"ContainerStarted","Data":"632249760c8f59d859add4edb1ffe7a813e25f25b76d0a3cc90c9c2d33006513"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.885909 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" event={"ID":"e05c0cf5-7ca3-47f2-810f-492e73edc19a","Type":"ContainerStarted","Data":"d6b40d78a3ffcf1ad92ebe6038c966652234e7b343d7f2f46c478439380e3b9a"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.887123 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" event={"ID":"e6fba668-d4b4-45fb-89ec-7808a1269d1d","Type":"ContainerStarted","Data":"2f823e16cd218859ec1374d8d3a67967b6010ed2b7ed57b2aab6fb11182c5f9c"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.889215 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" event={"ID":"3b9a689e-54e3-48df-a102-500878c35aa2","Type":"ContainerStarted","Data":"eb9ab62e50acb4a4f49c09f8b52e94487d88dd6859d94ba92ff9135963dbd844"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.890559 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fh6qr" event={"ID":"57df6e6f-6814-477c-aff1-19d5eb81e4c1","Type":"ContainerStarted","Data":"dce586a82dcfa4d67412c4acb12c4a7973740045945896ecf04322bfd8691e82"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.900097 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.900889 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.400860903 +0000 UTC m=+144.377269386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.902507 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:50 crc kubenswrapper[4897]: E0214 18:44:50.903393 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.403378576 +0000 UTC m=+144.379787049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:50 crc kubenswrapper[4897]: W0214 18:44:50.918023 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19dafede_65e3_4652_880d_55d3d86dc12b.slice/crio-77626213caa106729aa3742c9562dfea68c17b898875e3afdb53c845d6e09d67 WatchSource:0}: Error finding container 77626213caa106729aa3742c9562dfea68c17b898875e3afdb53c845d6e09d67: Status 404 returned error can't find the container with id 77626213caa106729aa3742c9562dfea68c17b898875e3afdb53c845d6e09d67 Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.954583 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" event={"ID":"65112b94-8028-49f5-91fc-b83b49f30017","Type":"ContainerStarted","Data":"cfd56766e90d33e4993505c9f64278631b31037b4875fce2b192197354eeda7d"} Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.979880 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.984253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d8kqp"] Feb 14 18:44:50 crc kubenswrapper[4897]: I0214 18:44:50.995661 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.003558 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.003768 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.503746252 +0000 UTC m=+144.480154735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.004344 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.004985 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.504976852 +0000 UTC m=+144.481385335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.057255 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6"] Feb 14 18:44:51 crc kubenswrapper[4897]: W0214 18:44:51.058599 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9baea172_0e9d_4866_917e_c5e0a57e1413.slice/crio-c205b07348096f03e6360604e338f9f5218f1707d192e76fc9a7b2b1e8926cb8 WatchSource:0}: Error finding container c205b07348096f03e6360604e338f9f5218f1707d192e76fc9a7b2b1e8926cb8: Status 404 returned error can't find the container with id c205b07348096f03e6360604e338f9f5218f1707d192e76fc9a7b2b1e8926cb8 Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.106835 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.116113 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.616089582 +0000 UTC m=+144.592498065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: W0214 18:44:51.128946 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f237a59_0e7e_4ae0_94c9_c6d451224a27.slice/crio-51c7d933d145e424ef224faf909a01317f1c9f40b2827222b6a2807d38ffa3fe WatchSource:0}: Error finding container 51c7d933d145e424ef224faf909a01317f1c9f40b2827222b6a2807d38ffa3fe: Status 404 returned error can't find the container with id 51c7d933d145e424ef224faf909a01317f1c9f40b2827222b6a2807d38ffa3fe Feb 14 18:44:51 crc kubenswrapper[4897]: W0214 18:44:51.129123 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode45f17b5_6656_4aef_95c8_b1856ae4f1c4.slice/crio-599e96833888941a3ef27b6aea0238e57b02eded158c44bdfef5044923430269 WatchSource:0}: Error finding container 599e96833888941a3ef27b6aea0238e57b02eded158c44bdfef5044923430269: Status 404 returned error can't find the container with id 599e96833888941a3ef27b6aea0238e57b02eded158c44bdfef5044923430269 Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.154863 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zs6vd" podStartSLOduration=124.154845074 podStartE2EDuration="2m4.154845074s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
18:44:51.150423889 +0000 UTC m=+144.126832382" watchObservedRunningTime="2026-02-14 18:44:51.154845074 +0000 UTC m=+144.131253557" Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.186424 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bzvvc"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.212743 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.213020 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.713008725 +0000 UTC m=+144.689417208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.213690 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.225114 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.229243 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rdcrz"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.283842 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.313773 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.314529 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:51.814482048 +0000 UTC m=+144.790890531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.315926 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wg7rv"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.416217 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.416721 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:51.916708745 +0000 UTC m=+144.893117228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.480094 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jmbj5"] Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.520129 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.520388 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.020365169 +0000 UTC m=+144.996773652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.520524 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.520881 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.020874486 +0000 UTC m=+144.997282969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.621687 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.622089 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.121994177 +0000 UTC m=+145.098402660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.623689 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.624052 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.124018713 +0000 UTC m=+145.100427196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.680151 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zh576" podStartSLOduration=123.680124186 podStartE2EDuration="2m3.680124186s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:51.678139261 +0000 UTC m=+144.654547754" watchObservedRunningTime="2026-02-14 18:44:51.680124186 +0000 UTC m=+144.656532669" Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.724655 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: W0214 18:44:51.725687 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68eb569a_ca5d_4eef_a936_fd697b26d0be.slice/crio-b7fb6cb3dccf83e2493c8c752b715ef233af55f9699a9c42c6d9e4ee1f8420b8 WatchSource:0}: Error finding container b7fb6cb3dccf83e2493c8c752b715ef233af55f9699a9c42c6d9e4ee1f8420b8: Status 404 returned error can't find the container with id b7fb6cb3dccf83e2493c8c752b715ef233af55f9699a9c42c6d9e4ee1f8420b8 Feb 14 
18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.725834 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.225813227 +0000 UTC m=+145.202221710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.739465 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.746993 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.750929 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-14 18:39:50 +0000 UTC, rotation deadline is 2026-11-16 01:53:05.845350622 +0000 UTC Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.750987 4897 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6583h8m14.094365888s for next certificate rotation Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.754971 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" 
podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.828047 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.828863 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.328843791 +0000 UTC m=+145.305252274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.911308 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podStartSLOduration=124.911289189 podStartE2EDuration="2m4.911289189s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:51.909007813 +0000 UTC m=+144.885416296" watchObservedRunningTime="2026-02-14 18:44:51.911289189 +0000 UTC m=+144.887697672" Feb 14 18:44:51 crc kubenswrapper[4897]: I0214 18:44:51.934641 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:51 crc kubenswrapper[4897]: E0214 18:44:51.935255 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.435238795 +0000 UTC m=+145.411647278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.035904 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.036236 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.536221962 +0000 UTC m=+145.512630445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.039950 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" event={"ID":"e05c0cf5-7ca3-47f2-810f-492e73edc19a","Type":"ContainerStarted","Data":"ac5635aed86f44856fe0b4b4705fd11b83ef1c5df0cc5a26155c49d46167eb2b"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.044594 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" event={"ID":"9baea172-0e9d-4866-917e-c5e0a57e1413","Type":"ContainerStarted","Data":"c205b07348096f03e6360604e338f9f5218f1707d192e76fc9a7b2b1e8926cb8"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.046888 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" event={"ID":"b15d9f59-a87a-47ef-a61f-4e791186229d","Type":"ContainerStarted","Data":"98376a3d0b28ef551fd8c3dc9e1dbfba84ee90966b7d9d7755bb3e7d4bd3690e"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.048735 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" event={"ID":"88eb139d-9259-4c72-b9db-0f0cd154fda9","Type":"ContainerStarted","Data":"e7e959f93dab04ecc50c90f13eed12d61e13b525527a5156c637a4383759543a"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.050251 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" event={"ID":"758d396c-63a7-4f41-a396-713cb90db5af","Type":"ContainerStarted","Data":"e8e4e3673da6e81d104d8d83a6c11cb185b41eb3714d7f5a445c479d8aa58b25"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.056103 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" event={"ID":"e45f17b5-6656-4aef-95c8-b1856ae4f1c4","Type":"ContainerStarted","Data":"599e96833888941a3ef27b6aea0238e57b02eded158c44bdfef5044923430269"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.057160 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" event={"ID":"10c2cb4a-c03b-49ca-a6ca-1b5637923932","Type":"ContainerStarted","Data":"456adddc3f1b5e278a9cdb4a1f85e82ccedc1dc0c531feec224afcf2418c2748"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.058662 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" event={"ID":"65112b94-8028-49f5-91fc-b83b49f30017","Type":"ContainerStarted","Data":"c2329655b02d0213d825dc3c05bda831f22417c675863af841ac7e26ab95cd98"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.062602 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" event={"ID":"0f237a59-0e7e-4ae0-94c9-c6d451224a27","Type":"ContainerStarted","Data":"51c7d933d145e424ef224faf909a01317f1c9f40b2827222b6a2807d38ffa3fe"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.065473 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" event={"ID":"a7b90621-706c-47e9-b361-14c9bb002f11","Type":"ContainerStarted","Data":"32f2e01bb87ddb408fafbbf293eb90a3bcec6b9c818a91481ddcedc2e41e1081"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 
18:44:52.079015 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" event={"ID":"2fd14f21-0836-40b2-b509-ec296556f45c","Type":"ContainerStarted","Data":"8980f38851b734fd983d92d9d38a844e776f0eb017736324404f406b6feb0cc9"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.090188 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" event={"ID":"5c5ace00-d072-440a-bc7b-982b96f636e7","Type":"ContainerStarted","Data":"27c95bac04bc0217b2c5a0a438d2fd6891e7de735d48e70bd57ab5ab96b5d5bc"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.090231 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-c5z8g" podStartSLOduration=124.090179964 podStartE2EDuration="2m4.090179964s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.083260307 +0000 UTC m=+145.059668800" watchObservedRunningTime="2026-02-14 18:44:52.090179964 +0000 UTC m=+145.066588437" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.097825 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" event={"ID":"d62c28f1-696b-4b88-8f46-67abf833ee4c","Type":"ContainerStarted","Data":"8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.098967 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.100382 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" 
event={"ID":"e6fba668-d4b4-45fb-89ec-7808a1269d1d","Type":"ContainerStarted","Data":"0d8266d51ae92f604bd96a1565b52d7f45033c55ef9a85f0075578a6b8fd754a"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.101086 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" event={"ID":"df96584e-d06a-4906-9c95-3e94936695ef","Type":"ContainerStarted","Data":"687d89451b7b62640890cfef82b8ec8c65be114d357c0540f7bb07db4b8ca15b"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.102821 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9kvql" event={"ID":"7ec1f803-3889-4483-87ae-9a38bd020818","Type":"ContainerStarted","Data":"22603977a86562629b8a5d09db269da5fd7192b43292c0f2a081e43c94a3d10c"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.103712 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9kvql" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.108588 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9n8vm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.108638 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.110566 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-msfx9" podStartSLOduration=124.110557063 
podStartE2EDuration="2m4.110557063s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.109990495 +0000 UTC m=+145.086398988" watchObservedRunningTime="2026-02-14 18:44:52.110557063 +0000 UTC m=+145.086965556" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.112064 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.112103 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.136977 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.138774 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.638758549 +0000 UTC m=+145.615167032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.165583 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" event={"ID":"19dafede-65e3-4652-880d-55d3d86dc12b","Type":"ContainerStarted","Data":"77626213caa106729aa3742c9562dfea68c17b898875e3afdb53c845d6e09d67"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.167706 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" event={"ID":"d70bf6df-ebee-4193-982f-e9d86147ea35","Type":"ContainerStarted","Data":"04fcc523f99f3b9cf656849f418c36f9b93e5099bfdbe964cdb606d912764fbe"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.179487 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ppn2g" podStartSLOduration=124.179469967 podStartE2EDuration="2m4.179469967s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.179132176 +0000 UTC m=+145.155540659" watchObservedRunningTime="2026-02-14 18:44:52.179469967 +0000 UTC m=+145.155878450" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.191612 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" 
event={"ID":"faa970d9-b5d7-49a1-b162-2bed0f528b71","Type":"ContainerStarted","Data":"a96ee81cf06e3ed1c601b628c75da40c6ce9217d6b0638b32f9a1988b12d5537"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.192585 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.201502 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.201540 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.214727 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fzn9r" podStartSLOduration=125.214713754 podStartE2EDuration="2m5.214713754s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.212530033 +0000 UTC m=+145.188938516" watchObservedRunningTime="2026-02-14 18:44:52.214713754 +0000 UTC m=+145.191122237" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.215529 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" 
event={"ID":"11f634b6-64a2-4d22-b194-a9515113a4e7","Type":"ContainerStarted","Data":"10c513620461590392bf9bb07882b745d632d759e8e4b3bc4839aed63e8ac052"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.237221 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" event={"ID":"15fa65ae-a663-434d-9d2d-2a69a3f7d81c","Type":"ContainerStarted","Data":"cffbd9c276abe2b0467f9cf0159eb8b53dd4f0766d90376272b0962a8fb3ca3e"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.238320 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.239902 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.739888101 +0000 UTC m=+145.716296584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.248438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" event={"ID":"b63cb010-df8f-4e29-a7f3-6b68cb03e63a","Type":"ContainerStarted","Data":"eab8376e2eaf1c707ce818d228e29b8faa792a64f6b0039826a2d196d649afa8"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.249310 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.275084 4897 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ws2d2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.275154 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.277511 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" event={"ID":"103a8a7a-d7e9-4d28-b909-cf3468e483e9","Type":"ContainerStarted","Data":"ef59cfede4e222d5331adcf18086e6fa0b94ba297707da12258aa42990611855"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.289817 4897 generic.go:334] "Generic (PLEG): container finished" podID="3b9a689e-54e3-48df-a102-500878c35aa2" containerID="b42a1c1686f9700191b65b3183f40ac41921538be9a4c03d449d59bad3ef4a70" exitCode=0 Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.290147 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" event={"ID":"3b9a689e-54e3-48df-a102-500878c35aa2","Type":"ContainerDied","Data":"b42a1c1686f9700191b65b3183f40ac41921538be9a4c03d449d59bad3ef4a70"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.314710 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" podStartSLOduration=124.314687217 podStartE2EDuration="2m4.314687217s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.253161737 +0000 UTC m=+145.229570220" watchObservedRunningTime="2026-02-14 18:44:52.314687217 +0000 UTC m=+145.291095700" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.315451 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" event={"ID":"72c5452f-efd7-406e-84de-0275882c823e","Type":"ContainerStarted","Data":"b3e948a9731d8cf21519a670ac0c47d98660364f68b93f4159cad913f12d0aa4"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.315770 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" 
podStartSLOduration=124.315561406 podStartE2EDuration="2m4.315561406s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.312713553 +0000 UTC m=+145.289122046" watchObservedRunningTime="2026-02-14 18:44:52.315561406 +0000 UTC m=+145.291969889" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.319971 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" event={"ID":"88a85445-8209-4b30-a0e0-c0f14d790fb5","Type":"ContainerStarted","Data":"f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.321745 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.330861 4897 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c8v6s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.330903 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.339211 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.340123 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.840098202 +0000 UTC m=+145.816506685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.341981 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9kvql" podStartSLOduration=125.341964694 podStartE2EDuration="2m5.341964694s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.340262158 +0000 UTC m=+145.316670641" watchObservedRunningTime="2026-02-14 18:44:52.341964694 +0000 UTC m=+145.318373177" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.360162 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" event={"ID":"85830a53-70c2-433d-a359-025fababa083","Type":"ContainerStarted","Data":"b5a38b3b3b6207c66a4b782cb967b33841d18d482013a274a80c38fa29dbeb05"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.361681 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-fh6qr" event={"ID":"57df6e6f-6814-477c-aff1-19d5eb81e4c1","Type":"ContainerStarted","Data":"704243e18955a7c14f0f6c13effcc8219b4bfe849f2746044e0fbe3e5f685855"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.362800 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rdcrz" event={"ID":"7dbb71e3-f936-4fb4-b5ba-772aa900f80d","Type":"ContainerStarted","Data":"f1affe0fa4ab671ffd3b8febe325df18828cba6468d59f19fbc81e8e672752f3"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.363598 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bzvvc" event={"ID":"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045","Type":"ContainerStarted","Data":"c0b94bb85a3d5fb5400c642ede9052b091105d1cba284de49f8faa61e29654e8"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.364764 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" event={"ID":"972de147-8a61-4e52-b8a3-2cedb4f22f11","Type":"ContainerStarted","Data":"a38019f537c1e74014d8bf6bbc47f452808624565c7576b5640fa07a6d0ee90b"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.367629 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" event={"ID":"68eb569a-ca5d-4eef-a936-fd697b26d0be","Type":"ContainerStarted","Data":"b7fb6cb3dccf83e2493c8c752b715ef233af55f9699a9c42c6d9e4ee1f8420b8"} Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.370974 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podStartSLOduration=124.370962146 podStartE2EDuration="2m4.370962146s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
18:44:52.369649213 +0000 UTC m=+145.346057706" watchObservedRunningTime="2026-02-14 18:44:52.370962146 +0000 UTC m=+145.347370629" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.412991 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" podStartSLOduration=125.412974626 podStartE2EDuration="2m5.412974626s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.395569185 +0000 UTC m=+145.371977678" watchObservedRunningTime="2026-02-14 18:44:52.412974626 +0000 UTC m=+145.389383109" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.427265 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fh6qr" podStartSLOduration=6.4272454549999996 podStartE2EDuration="6.427245455s" podCreationTimestamp="2026-02-14 18:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.425188417 +0000 UTC m=+145.401596900" watchObservedRunningTime="2026-02-14 18:44:52.427245455 +0000 UTC m=+145.403653928" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.440969 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.444597 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-14 18:44:52.944583534 +0000 UTC m=+145.920992017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.500729 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podStartSLOduration=124.500713258 podStartE2EDuration="2m4.500713258s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.500621135 +0000 UTC m=+145.477029618" watchObservedRunningTime="2026-02-14 18:44:52.500713258 +0000 UTC m=+145.477121741" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.501540 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wn9fs" podStartSLOduration=124.501532895 podStartE2EDuration="2m4.501532895s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.469157771 +0000 UTC m=+145.445566254" watchObservedRunningTime="2026-02-14 18:44:52.501532895 +0000 UTC m=+145.477941378" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.528864 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" podStartSLOduration=124.528849802 podStartE2EDuration="2m4.528849802s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.528725398 +0000 UTC m=+145.505133871" watchObservedRunningTime="2026-02-14 18:44:52.528849802 +0000 UTC m=+145.505258285" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.557366 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gws9q" podStartSLOduration=124.557350378 podStartE2EDuration="2m4.557350378s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.556754538 +0000 UTC m=+145.533163031" watchObservedRunningTime="2026-02-14 18:44:52.557350378 +0000 UTC m=+145.533758861" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.562056 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.562681 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.062651172 +0000 UTC m=+146.039059655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.601749 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6jjtk" podStartSLOduration=125.601733076 podStartE2EDuration="2m5.601733076s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:52.598459808 +0000 UTC m=+145.574868301" watchObservedRunningTime="2026-02-14 18:44:52.601733076 +0000 UTC m=+145.578141569" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.665203 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.665711 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.165696486 +0000 UTC m=+146.142104969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.745976 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:52 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:52 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:52 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.746041 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.768614 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.769060 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:53.26901801 +0000 UTC m=+146.245426493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.873179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.873936 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.373912065 +0000 UTC m=+146.350320538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.914308 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.914354 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.916185 4897 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tndnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.916246 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" podUID="5c5ace00-d072-440a-bc7b-982b96f636e7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.974194 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 
18:44:52.974323 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.474303472 +0000 UTC m=+146.450711955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:52 crc kubenswrapper[4897]: I0214 18:44:52.974680 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:52 crc kubenswrapper[4897]: E0214 18:44:52.975141 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.4751253 +0000 UTC m=+146.451533783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.067100 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-62b7q" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.075330 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.075460 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.575441344 +0000 UTC m=+146.551849827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.075554 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.075852 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.575844218 +0000 UTC m=+146.552252701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.176509 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.177015 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.676991459 +0000 UTC m=+146.653399942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.177144 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.178050 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.678013063 +0000 UTC m=+146.654421536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.278573 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.278925 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.778911567 +0000 UTC m=+146.755320050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.372082 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bzvvc" event={"ID":"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045","Type":"ContainerStarted","Data":"567919dbf23cb1a67e65740fada28066eea3f03e9b22cb3402c352a521d2521d"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.374109 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t5h9t" event={"ID":"7a4a964d-7591-4a23-bc83-2fda90a1b3da","Type":"ContainerStarted","Data":"c7aeaf4d8fdbc76164c17bf502ad8ed9d9a8e83ae461fec7941af1b7118d5620"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.377270 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" event={"ID":"3b9a689e-54e3-48df-a102-500878c35aa2","Type":"ContainerStarted","Data":"b4158e9aae62651f009339a55ec07df80d0c733231921cf08d84055037eca4bf"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.378888 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" event={"ID":"19dafede-65e3-4652-880d-55d3d86dc12b","Type":"ContainerStarted","Data":"6b37355edc0e2ab991909de403ec6830a4f87262adb860cdca85f051eb04dcbb"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.379715 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.380042 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:53.880017338 +0000 UTC m=+146.856425821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.380988 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6jjtk" event={"ID":"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a","Type":"ContainerStarted","Data":"8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.383904 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" event={"ID":"65112b94-8028-49f5-91fc-b83b49f30017","Type":"ContainerStarted","Data":"310b8368e7e1abc7c069605baf97e08d2a99816ece27a9c30a781655d2445b71"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.387686 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" 
event={"ID":"972de147-8a61-4e52-b8a3-2cedb4f22f11","Type":"ContainerStarted","Data":"5a1517fe5e1e412540b755267feea2e6e8f15a40066c612bd0f44c287bc15dbe"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.389612 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" event={"ID":"103a8a7a-d7e9-4d28-b909-cf3468e483e9","Type":"ContainerStarted","Data":"229246c0f4f30514d7c4972556337480e5cb1e39a997362f45e7d25a55858fb7"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.392462 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" event={"ID":"0f237a59-0e7e-4ae0-94c9-c6d451224a27","Type":"ContainerStarted","Data":"75fceb5d5e8fc027787b7299a8a4d700095bfcd2971ba6e358969b48557bcc33"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.392868 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.394232 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.394289 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.395183 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" 
event={"ID":"e6fba668-d4b4-45fb-89ec-7808a1269d1d","Type":"ContainerStarted","Data":"e25bd7f6a475c1213d02e80c4b029e2d57d374b6c17779622c0c2cba7dbd12d7"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.396594 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" event={"ID":"15fa65ae-a663-434d-9d2d-2a69a3f7d81c","Type":"ContainerStarted","Data":"36b3b14e07c51aa95763999de47f73cdc338b1fe730b2946b6adc3bc737c7c4d"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.397840 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" event={"ID":"df96584e-d06a-4906-9c95-3e94936695ef","Type":"ContainerStarted","Data":"dea0c9edc211fccea55822c46bd7b5658c7ffa471b1905af1d2d13f32b3a0452"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.399864 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rdcrz" event={"ID":"7dbb71e3-f936-4fb4-b5ba-772aa900f80d","Type":"ContainerStarted","Data":"c36cde3e91eece84cca11cb0e1a4d0216ef2c12e158f41db1066bf825d44d055"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.401989 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" event={"ID":"88eb139d-9259-4c72-b9db-0f0cd154fda9","Type":"ContainerStarted","Data":"6cee4eb8ab9bacf7cf82ce1c2f3a6a805e9fab84fa9ed7a77c6ba301fb087b79"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.402126 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" event={"ID":"88eb139d-9259-4c72-b9db-0f0cd154fda9","Type":"ContainerStarted","Data":"dff661aa7dd934bbcbcb98c1f97fbcb32646059e8963df34285f29453cbeda80"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.403328 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" event={"ID":"9baea172-0e9d-4866-917e-c5e0a57e1413","Type":"ContainerStarted","Data":"7f408b79d169938c78f7a3116bfc9363dded389b2624f5b40255e7e3fcd6e231"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.405434 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" event={"ID":"10c2cb4a-c03b-49ca-a6ca-1b5637923932","Type":"ContainerStarted","Data":"b80799e9711b01be0bd98f450bd1429d84cbfaadd05ec144b0042eebd2346eb7"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.406552 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" event={"ID":"85830a53-70c2-433d-a359-025fababa083","Type":"ContainerStarted","Data":"f8cd91ba1c8c6fb76daf258862386009b48967e82b99e1e0fbdf4a9bc00a4e60"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.409316 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" event={"ID":"d70bf6df-ebee-4193-982f-e9d86147ea35","Type":"ContainerStarted","Data":"1e06dc45bfc8a046c86622b3a41417b9c864472f4f7bb9d5607d2ddedadf1ac2"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.414246 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" event={"ID":"67063058-60ca-4efd-a102-cd90d5e43e56","Type":"ContainerStarted","Data":"5922f407d20775fa74a398e71b1b8329154291f70c4413e4bd1e39edb53d7ba3"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.418552 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" event={"ID":"2fd14f21-0836-40b2-b509-ec296556f45c","Type":"ContainerStarted","Data":"fddf108cd303253b44fc2052b2e20b9f244304238688e02c64c1121f26c775ce"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 
18:44:53.420636 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" event={"ID":"758d396c-63a7-4f41-a396-713cb90db5af","Type":"ContainerStarted","Data":"8537a1e932bd869b28675a60dce2a4141588cf069e6eaf4ed333c6b212672b21"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.420748 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" event={"ID":"758d396c-63a7-4f41-a396-713cb90db5af","Type":"ContainerStarted","Data":"cbd427dd2608f50fed39edcafc1dab616f448f238156728be87145a49408fe63"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.422854 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" event={"ID":"e45f17b5-6656-4aef-95c8-b1856ae4f1c4","Type":"ContainerStarted","Data":"7c319128e5f431343213fcc93efeb6f699c9b2294d3875f368ef2ea8838bc50f"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.425693 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" event={"ID":"77cd9be5-c96a-494c-9d40-1068555dceda","Type":"ContainerStarted","Data":"1857b271c5636f61bcad7807d8c8c739840f9361e25016487e4ba27ae21afea8"} Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.425729 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.425921 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.426960 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.427085 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.426990 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9n8vm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.427282 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.427621 4897 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c8v6s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.427686 4897 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.429012 4897 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ws2d2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.429060 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.447370 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kknqp" podStartSLOduration=125.447339899 podStartE2EDuration="2m5.447339899s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.413547819 +0000 UTC m=+146.389956302" watchObservedRunningTime="2026-02-14 18:44:53.447339899 +0000 UTC m=+146.423748382" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.481225 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.500550 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.000514915 +0000 UTC m=+146.976923398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.507854 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xcksp" podStartSLOduration=126.507827135 podStartE2EDuration="2m6.507827135s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.450353388 +0000 UTC m=+146.426761891" watchObservedRunningTime="2026-02-14 18:44:53.507827135 +0000 UTC m=+146.484235618" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.543735 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podStartSLOduration=125.543714983 podStartE2EDuration="2m5.543714983s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.500718562 +0000 UTC m=+146.477127065" watchObservedRunningTime="2026-02-14 18:44:53.543714983 +0000 UTC m=+146.520123466" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.545838 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7xmvn" podStartSLOduration=125.545821713 podStartE2EDuration="2m5.545821713s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.544298583 +0000 UTC m=+146.520707136" watchObservedRunningTime="2026-02-14 18:44:53.545821713 +0000 UTC m=+146.522230196" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.601944 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l5nd2" podStartSLOduration=125.601927936 podStartE2EDuration="2m5.601927936s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.601329866 +0000 UTC m=+146.577738409" watchObservedRunningTime="2026-02-14 18:44:53.601927936 +0000 UTC m=+146.578336419" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.611175 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.611555 4897 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.111538301 +0000 UTC m=+147.087946784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.649579 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rdcrz" podStartSLOduration=6.649561461 podStartE2EDuration="6.649561461s" podCreationTimestamp="2026-02-14 18:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.627360272 +0000 UTC m=+146.603768755" watchObservedRunningTime="2026-02-14 18:44:53.649561461 +0000 UTC m=+146.625969934" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.650692 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7lrwj" podStartSLOduration=125.650687087 podStartE2EDuration="2m5.650687087s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.649047774 +0000 UTC m=+146.625456277" watchObservedRunningTime="2026-02-14 18:44:53.650687087 +0000 UTC m=+146.627095560" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.669235 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xlrkp" podStartSLOduration=125.669220446 podStartE2EDuration="2m5.669220446s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.667716706 +0000 UTC m=+146.644125179" watchObservedRunningTime="2026-02-14 18:44:53.669220446 +0000 UTC m=+146.645628929" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.705983 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2hqm2" podStartSLOduration=125.705966963 podStartE2EDuration="2m5.705966963s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.704832986 +0000 UTC m=+146.681241479" watchObservedRunningTime="2026-02-14 18:44:53.705966963 +0000 UTC m=+146.682375446" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.712606 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.712769 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.212745436 +0000 UTC m=+147.189153979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.712826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.713323 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.213308894 +0000 UTC m=+147.189717377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.731113 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" podStartSLOduration=125.731096309 podStartE2EDuration="2m5.731096309s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.730073815 +0000 UTC m=+146.706482298" watchObservedRunningTime="2026-02-14 18:44:53.731096309 +0000 UTC m=+146.707504792" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.734449 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:53 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:53 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:53 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.734496 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.784706 4897 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" podStartSLOduration=125.784691848 podStartE2EDuration="2m5.784691848s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.781986729 +0000 UTC m=+146.758395232" watchObservedRunningTime="2026-02-14 18:44:53.784691848 +0000 UTC m=+146.761100341" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.807923 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" podStartSLOduration=125.807906451 podStartE2EDuration="2m5.807906451s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.805222333 +0000 UTC m=+146.781630826" watchObservedRunningTime="2026-02-14 18:44:53.807906451 +0000 UTC m=+146.784314934" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.813856 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.814001 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.313984891 +0000 UTC m=+147.290393374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.814175 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.814507 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.314498658 +0000 UTC m=+147.290907141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.871499 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-d8kqp" podStartSLOduration=125.871481579 podStartE2EDuration="2m5.871481579s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.848934108 +0000 UTC m=+146.825342601" watchObservedRunningTime="2026-02-14 18:44:53.871481579 +0000 UTC m=+146.847890052" Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.915400 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.915546 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.415522596 +0000 UTC m=+147.391931079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:53 crc kubenswrapper[4897]: I0214 18:44:53.916122 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:53 crc kubenswrapper[4897]: E0214 18:44:53.916473 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.416461066 +0000 UTC m=+147.392869549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.016937 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.017173 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.517145353 +0000 UTC m=+147.493553836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.017243 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.017564 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.517556026 +0000 UTC m=+147.493964509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.118315 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.118552 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.618510032 +0000 UTC m=+147.594918515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.119060 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.119389 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.619375311 +0000 UTC m=+147.595783794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.220227 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.220404 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.720379368 +0000 UTC m=+147.696787861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.220561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.220933 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.720922716 +0000 UTC m=+147.697331249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.264708 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.264751 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.265970 4897 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-f9lc5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.266021 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" podUID="67063058-60ca-4efd-a102-cd90d5e43e56" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.321908 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc 
kubenswrapper[4897]: E0214 18:44:54.322053 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.822021386 +0000 UTC m=+147.798429869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.322586 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.322847 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.822837823 +0000 UTC m=+147.799246306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.423739 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.423958 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.923921863 +0000 UTC m=+147.900330346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.424223 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.424593 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:54.924578025 +0000 UTC m=+147.900986578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.433557 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bzvvc" event={"ID":"87f809c6-5e7e-47ec-8fd2-3eca0bd6b045","Type":"ContainerStarted","Data":"cbf8de89b19da2ffd7d68e227d2e23ee8084dcd33b3f21ae403a7276aaeda7db"} Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.433634 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bzvvc" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.435443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" event={"ID":"d70bf6df-ebee-4193-982f-e9d86147ea35","Type":"ContainerStarted","Data":"6af34146dba73a4a80b68f2f248534841f2749ba04b863288fc4c57ae999f4b2"} Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.437603 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" event={"ID":"15fa65ae-a663-434d-9d2d-2a69a3f7d81c","Type":"ContainerStarted","Data":"4470c0c809cae7d32f693d080d3b0047bfdb6b608e81a20d43877a4bdc32e360"} Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438147 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 14 
18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438189 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438230 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438262 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438592 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438608 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438949 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9n8vm 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.438969 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.439078 4897 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ws2d2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.439109 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.456695 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mgfqz" podStartSLOduration=126.456676929 podStartE2EDuration="2m6.456676929s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:53.876160083 +0000 UTC m=+146.852568566" watchObservedRunningTime="2026-02-14 18:44:54.456676929 
+0000 UTC m=+147.433085412" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.459347 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bzvvc" podStartSLOduration=7.459336056 podStartE2EDuration="7.459336056s" podCreationTimestamp="2026-02-14 18:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.455212171 +0000 UTC m=+147.431620654" watchObservedRunningTime="2026-02-14 18:44:54.459336056 +0000 UTC m=+147.435744539" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.477214 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wg7rv" podStartSLOduration=126.477189363 podStartE2EDuration="2m6.477189363s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.477072679 +0000 UTC m=+147.453481182" watchObservedRunningTime="2026-02-14 18:44:54.477189363 +0000 UTC m=+147.453597846" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.505301 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7lq6" podStartSLOduration=126.505272775 podStartE2EDuration="2m6.505272775s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.500270111 +0000 UTC m=+147.476678604" watchObservedRunningTime="2026-02-14 18:44:54.505272775 +0000 UTC m=+147.481681248" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.524908 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.527183 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.027153264 +0000 UTC m=+148.003561747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.552548 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podStartSLOduration=126.552531937 podStartE2EDuration="2m6.552531937s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.550512031 +0000 UTC m=+147.526920514" watchObservedRunningTime="2026-02-14 18:44:54.552531937 +0000 UTC m=+147.528940420" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.605284 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" podStartSLOduration=126.605260629 podStartE2EDuration="2m6.605260629s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.576892677 +0000 UTC m=+147.553301170" watchObservedRunningTime="2026-02-14 18:44:54.605260629 +0000 UTC m=+147.581669112" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.628683 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.629137 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.129116713 +0000 UTC m=+148.105525276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.640500 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podStartSLOduration=127.640480686 podStartE2EDuration="2m7.640480686s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.638982487 +0000 UTC m=+147.615390990" watchObservedRunningTime="2026-02-14 18:44:54.640480686 +0000 UTC m=+147.616889169" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.640791 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5jxh" podStartSLOduration=126.640784766 podStartE2EDuration="2m6.640784766s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:54.605727494 +0000 UTC m=+147.582135977" watchObservedRunningTime="2026-02-14 18:44:54.640784766 +0000 UTC m=+147.617193249" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.730018 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.730244 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.230218194 +0000 UTC m=+148.206626677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.730306 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.730663 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.230651948 +0000 UTC m=+148.207060431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.732968 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:54 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:54 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:54 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.733042 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.831923 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.832316 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:55.332296926 +0000 UTC m=+148.308705409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:54 crc kubenswrapper[4897]: I0214 18:44:54.933099 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:54 crc kubenswrapper[4897]: E0214 18:44:54.933500 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.433481109 +0000 UTC m=+148.409889662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.034554 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.034756 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.534734075 +0000 UTC m=+148.511142558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.034842 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.035125 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.535115078 +0000 UTC m=+148.511523561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.136081 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.136268 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.636243349 +0000 UTC m=+148.612651832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.136421 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.136800 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.636791606 +0000 UTC m=+148.613200089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.237454 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.237664 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.737625418 +0000 UTC m=+148.714033901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.237821 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.238147 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.738137635 +0000 UTC m=+148.714546118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.338385 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.338731 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.838714218 +0000 UTC m=+148.815122701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.439795 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.440153 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:55.94013681 +0000 UTC m=+148.916545293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.442420 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" event={"ID":"68eb569a-ca5d-4eef-a936-fd697b26d0be","Type":"ContainerStarted","Data":"de2ec601eedfe31a00146b7e7b73953b95c18e0f30f95355f1c2deafad5897ef"} Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.442712 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.541277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.541444 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.041424597 +0000 UTC m=+149.017833080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.541664 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.542578 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.042568665 +0000 UTC m=+149.018977148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.597622 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.643291 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.643561 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.143518969 +0000 UTC m=+149.119927452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.644090 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.644429 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.144413419 +0000 UTC m=+149.120821902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.735100 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:55 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:55 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:55 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.735185 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.745168 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.745312 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:56.245282812 +0000 UTC m=+149.221691295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.745847 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.746325 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.246312996 +0000 UTC m=+149.222721479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.746443 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.746493 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.749618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.756664 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.847375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.847531 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.847641 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.347587442 +0000 UTC m=+149.323995925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.847952 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.848071 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.848406 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.348390498 +0000 UTC m=+149.324798981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.852782 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.865189 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.918297 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.934724 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.949424 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:55 crc kubenswrapper[4897]: E0214 18:44:55.949898 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.449874382 +0000 UTC m=+149.426282865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:55 crc kubenswrapper[4897]: I0214 18:44:55.950358 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.051572 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.052219 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.552194052 +0000 UTC m=+149.528602535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.152639 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.153181 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.653159368 +0000 UTC m=+149.629567851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.255252 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.255543 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.755532671 +0000 UTC m=+149.731941154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.357288 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.357880 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:56.857864901 +0000 UTC m=+149.834273384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.442701 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.442762 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.459956 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.460224 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-14 18:44:56.960212113 +0000 UTC m=+149.936620586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.477096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5554db4b8eeb13a5d3d4fe458f144c27d475ebcd9c3d1be5ee28d4448793f063"} Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.497013 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" event={"ID":"68eb569a-ca5d-4eef-a936-fd697b26d0be","Type":"ContainerStarted","Data":"c8e0c8007a628d4349c115d79b5e12498a4985b777a46246f95aa023157bb1e0"} Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.561201 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.562265 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:57.062249344 +0000 UTC m=+150.038657827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.663762 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.664095 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.164083759 +0000 UTC m=+150.140492242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.742176 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:56 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:56 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:56 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.742518 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.764576 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.764815 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:57.264774766 +0000 UTC m=+150.241183249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.764923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.765916 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.265900654 +0000 UTC m=+150.242309137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.826506 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4c74d"] Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.832290 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.836942 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.867106 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.867713 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.367690076 +0000 UTC m=+150.344098559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.911633 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4c74d"] Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.968533 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9mp\" (UniqueName: \"kubernetes.io/projected/037c41d9-7976-43c9-baa6-57aec44c28de-kube-api-access-wp9mp\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.968590 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.968660 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-utilities\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:56 crc kubenswrapper[4897]: I0214 18:44:56.968681 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-catalog-content\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:56 crc kubenswrapper[4897]: E0214 18:44:56.968960 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.468949282 +0000 UTC m=+150.445357765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.013518 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rph5f"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.016923 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.024220 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.035162 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rph5f"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069475 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.069618 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.569578357 +0000 UTC m=+150.545986840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069752 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-utilities\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069799 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp9mp\" (UniqueName: \"kubernetes.io/projected/037c41d9-7976-43c9-baa6-57aec44c28de-kube-api-access-wp9mp\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069826 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvq8l\" (UniqueName: \"kubernetes.io/projected/d360c9a9-d428-4ca4-9379-e052a6e60b22-kube-api-access-xvq8l\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069845 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-catalog-content\") pod 
\"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069900 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069941 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-utilities\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.069960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-catalog-content\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.070365 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-catalog-content\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.070506 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-14 18:44:57.570497507 +0000 UTC m=+150.546905990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.070560 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-utilities\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.097061 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9mp\" (UniqueName: \"kubernetes.io/projected/037c41d9-7976-43c9-baa6-57aec44c28de-kube-api-access-wp9mp\") pod \"certified-operators-4c74d\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.172896 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.173080 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvq8l\" (UniqueName: 
\"kubernetes.io/projected/d360c9a9-d428-4ca4-9379-e052a6e60b22-kube-api-access-xvq8l\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.173109 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-catalog-content\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.173173 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-utilities\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.173572 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-utilities\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.173641 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.673627904 +0000 UTC m=+150.650036387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.174142 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-catalog-content\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.197067 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvq8l\" (UniqueName: \"kubernetes.io/projected/d360c9a9-d428-4ca4-9379-e052a6e60b22-kube-api-access-xvq8l\") pod \"community-operators-rph5f\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.197315 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.210790 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84zhb"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.211904 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.228242 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84zhb"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.274455 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xs62\" (UniqueName: \"kubernetes.io/projected/696766b1-de35-447a-8f84-537044aa0f34-kube-api-access-7xs62\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.274837 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-utilities\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.274868 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-catalog-content\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.274900 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 
18:44:57.275168 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.775155759 +0000 UTC m=+150.751564242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.366990 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.376384 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.376541 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.876516128 +0000 UTC m=+150.852924601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.376623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-utilities\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.376680 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-catalog-content\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.376711 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.376742 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xs62\" (UniqueName: \"kubernetes.io/projected/696766b1-de35-447a-8f84-537044aa0f34-kube-api-access-7xs62\") pod \"certified-operators-84zhb\" (UID: 
\"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.377329 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.877318484 +0000 UTC m=+150.853726967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.377632 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-catalog-content\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.377722 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-utilities\") pod \"certified-operators-84zhb\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.406963 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xs62\" (UniqueName: \"kubernetes.io/projected/696766b1-de35-447a-8f84-537044aa0f34-kube-api-access-7xs62\") pod \"certified-operators-84zhb\" (UID: 
\"696766b1-de35-447a-8f84-537044aa0f34\") " pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.410720 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6rm4"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.412087 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.428765 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6rm4"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.477769 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.478081 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-catalog-content\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.478138 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpjk\" (UniqueName: \"kubernetes.io/projected/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-kube-api-access-qnpjk\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.478195 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-utilities\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.478290 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:57.97827508 +0000 UTC m=+150.954683563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.525822 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d605b97d281d14935f7d6dff0cfd2d6fef9b71ecb41b2bd8f0ff8f64d71abafa"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.527101 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.530088 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"92b456c0e2f1fb470309849ea558ff3bd62e9377d03633ab174b582e95cbd6ef"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.530110 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0c94e28ce7758826243138ccc4c0fb4e8cff8bf74ea951b6a8822f48bea8f404"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.536830 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" event={"ID":"68eb569a-ca5d-4eef-a936-fd697b26d0be","Type":"ContainerStarted","Data":"40bbdb6d9b275fbe6fce0607b2d094640ccf3abd01882714170c100ba6dc1434"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.540943 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"56f7ffea5d45a45e9ffc15e845940bf2cec6e539ae5bc3eebd563f30becaa64f"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.541152 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4d9c16d505ac074f0a9d17c93b6775927d901229412aa5a03a99280bbdea0834"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.557196 4897 generic.go:334] "Generic (PLEG): container finished" podID="85830a53-70c2-433d-a359-025fababa083" containerID="f8cd91ba1c8c6fb76daf258862386009b48967e82b99e1e0fbdf4a9bc00a4e60" exitCode=0 Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.557246 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" 
event={"ID":"85830a53-70c2-433d-a359-025fababa083","Type":"ContainerDied","Data":"f8cd91ba1c8c6fb76daf258862386009b48967e82b99e1e0fbdf4a9bc00a4e60"} Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.568660 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.580814 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnpjk\" (UniqueName: \"kubernetes.io/projected/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-kube-api-access-qnpjk\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.580874 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.580901 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-utilities\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.581149 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-catalog-content\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 
18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.582307 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.082280656 +0000 UTC m=+151.058689139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.582321 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-utilities\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.582419 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-catalog-content\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.603343 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4c74d"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.620820 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnpjk\" (UniqueName: 
\"kubernetes.io/projected/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-kube-api-access-qnpjk\") pod \"community-operators-h6rm4\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.666902 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rph5f"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.682054 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.682194 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.182159956 +0000 UTC m=+151.158568439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.682460 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.684370 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.184354378 +0000 UTC m=+151.160762861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.696214 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" podStartSLOduration=10.696197407 podStartE2EDuration="10.696197407s" podCreationTimestamp="2026-02-14 18:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:57.674131603 +0000 UTC m=+150.650540086" watchObservedRunningTime="2026-02-14 18:44:57.696197407 +0000 UTC m=+150.672605890" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.743421 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:57 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:57 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:57 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.743492 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.767408 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.783743 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.783896 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.283867567 +0000 UTC m=+151.260276050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.784287 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.784764 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.284747966 +0000 UTC m=+151.261156459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.860297 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84zhb"] Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.885154 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.885582 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.385566308 +0000 UTC m=+151.361974791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.924255 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.931091 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" Feb 14 18:44:57 crc kubenswrapper[4897]: I0214 18:44:57.988918 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:57 crc kubenswrapper[4897]: E0214 18:44:57.990347 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.490334908 +0000 UTC m=+151.466743471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.039175 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6rm4"] Feb 14 18:44:58 crc kubenswrapper[4897]: W0214 18:44:58.046120 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a07b450_333c_4f3f_8c4d_4b9bd35b7d74.slice/crio-04d8ef554810c70e1851f6cd3e7a10efb226bbffa990026363f3043ceeff0b22 WatchSource:0}: Error finding container 04d8ef554810c70e1851f6cd3e7a10efb226bbffa990026363f3043ceeff0b22: Status 404 returned error can't find the container with id 04d8ef554810c70e1851f6cd3e7a10efb226bbffa990026363f3043ceeff0b22 Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.071669 4897 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.089585 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.089776 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.589749704 +0000 UTC m=+151.566158187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.089924 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.090198 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.590190437 +0000 UTC m=+151.566598920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.190612 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.690591076 +0000 UTC m=+151.666999569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.190653 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.190990 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.191317 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.691307909 +0000 UTC m=+151.667716402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.292511 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.293101 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.793082752 +0000 UTC m=+151.769491245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.382910 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.383965 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.389765 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.429742 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.429790 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.430089 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.430209 4897 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:58.930195335 +0000 UTC m=+151.906603818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.530997 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.531250 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:59.031204683 +0000 UTC m=+152.007613166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.531341 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.531581 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.531644 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.531665 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-14 18:44:59.031648727 +0000 UTC m=+152.008057210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.563571 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerID="bee5b88c3c6c44098aed3f41c7bcd73da5c9ce0da8e84552712bb7824754b58b" exitCode=0 Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.563639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6rm4" event={"ID":"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74","Type":"ContainerDied","Data":"bee5b88c3c6c44098aed3f41c7bcd73da5c9ce0da8e84552712bb7824754b58b"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.563667 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6rm4" event={"ID":"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74","Type":"ContainerStarted","Data":"04d8ef554810c70e1851f6cd3e7a10efb226bbffa990026363f3043ceeff0b22"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.566243 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.566438 4897 generic.go:334] "Generic (PLEG): container finished" podID="037c41d9-7976-43c9-baa6-57aec44c28de" containerID="af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e" exitCode=0 Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.566490 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4c74d" event={"ID":"037c41d9-7976-43c9-baa6-57aec44c28de","Type":"ContainerDied","Data":"af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.566546 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c74d" event={"ID":"037c41d9-7976-43c9-baa6-57aec44c28de","Type":"ContainerStarted","Data":"2ae33caf3fc86ea8248e57588cbc604a255b0a6f68037dd5fe9f850e31d29842"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.569970 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" event={"ID":"68eb569a-ca5d-4eef-a936-fd697b26d0be","Type":"ContainerStarted","Data":"4be271d7f2d4d3054ad62f0f8f7d4a55b91949129e5789ef56c5cef09f45f6e3"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.574240 4897 generic.go:334] "Generic (PLEG): container finished" podID="696766b1-de35-447a-8f84-537044aa0f34" containerID="858bc61f7bb6de22b22ee68c63ad04754b6e31ca6e5b45016fe83d84d7f6dc7e" exitCode=0 Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.574603 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerDied","Data":"858bc61f7bb6de22b22ee68c63ad04754b6e31ca6e5b45016fe83d84d7f6dc7e"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.574653 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerStarted","Data":"e4491d4fff704b77d948b6406477e0f8338ff38fb4e93821cf8ebd309084c212"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.581640 4897 generic.go:334] "Generic (PLEG): container finished" podID="d360c9a9-d428-4ca4-9379-e052a6e60b22" 
containerID="06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab" exitCode=0 Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.581742 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rph5f" event={"ID":"d360c9a9-d428-4ca4-9379-e052a6e60b22","Type":"ContainerDied","Data":"06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.581783 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rph5f" event={"ID":"d360c9a9-d428-4ca4-9379-e052a6e60b22","Type":"ContainerStarted","Data":"71890c1e90a3c31dfbff477b26b57ec30f807fc2b82bb4d01ed3d7070a7aed7d"} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.609943 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.639267 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.639451 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:59.139426747 +0000 UTC m=+152.115835220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.639527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.639583 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.639613 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.639699 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.639854 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:59.139845451 +0000 UTC m=+152.116253934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.672073 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.732698 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:58 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:58 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:58 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.732871 4897 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.740273 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.740612 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.740682 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:59.240663692 +0000 UTC m=+152.217072175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.752459 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.753361 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 18:44:59.253347279 +0000 UTC m=+152.229755762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fq4zf" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.854564 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.876438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85830a53-70c2-433d-a359-025fababa083-secret-volume\") pod \"85830a53-70c2-433d-a359-025fababa083\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.876501 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85830a53-70c2-433d-a359-025fababa083-config-volume\") pod \"85830a53-70c2-433d-a359-025fababa083\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.876685 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.876707 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7lwt\" (UniqueName: \"kubernetes.io/projected/85830a53-70c2-433d-a359-025fababa083-kube-api-access-n7lwt\") pod \"85830a53-70c2-433d-a359-025fababa083\" (UID: \"85830a53-70c2-433d-a359-025fababa083\") " Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.879146 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85830a53-70c2-433d-a359-025fababa083-config-volume" (OuterVolumeSpecName: "config-volume") pod "85830a53-70c2-433d-a359-025fababa083" (UID: "85830a53-70c2-433d-a359-025fababa083"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:44:58 crc kubenswrapper[4897]: E0214 18:44:58.879243 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 18:44:59.379226933 +0000 UTC m=+152.355635416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.891268 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85830a53-70c2-433d-a359-025fababa083-kube-api-access-n7lwt" (OuterVolumeSpecName: "kube-api-access-n7lwt") pod "85830a53-70c2-433d-a359-025fababa083" (UID: "85830a53-70c2-433d-a359-025fababa083"). InnerVolumeSpecName "kube-api-access-n7lwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.894784 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85830a53-70c2-433d-a359-025fababa083-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85830a53-70c2-433d-a359-025fababa083" (UID: "85830a53-70c2-433d-a359-025fababa083"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.952541 4897 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-14T18:44:58.07169426Z","Handler":null,"Name":""} Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.955868 4897 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.955928 4897 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.978855 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.979015 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7lwt\" (UniqueName: \"kubernetes.io/projected/85830a53-70c2-433d-a359-025fababa083-kube-api-access-n7lwt\") on node \"crc\" DevicePath \"\"" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.979057 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85830a53-70c2-433d-a359-025fababa083-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.979068 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/85830a53-70c2-433d-a359-025fababa083-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.987320 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.989016 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 18:44:58 crc kubenswrapper[4897]: I0214 18:44:58.989061 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:59 crc kubenswrapper[4897]: W0214 18:44:59.001210 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod00f3f4a6_8fc6_493b_95eb_5aca7a51b9f9.slice/crio-9a4d60f15ca9443eb9039e13e971e8fe6588c7322190485a3a291d13f175985e WatchSource:0}: Error finding container 9a4d60f15ca9443eb9039e13e971e8fe6588c7322190485a3a291d13f175985e: Status 404 returned error can't find the container with id 9a4d60f15ca9443eb9039e13e971e8fe6588c7322190485a3a291d13f175985e Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.041211 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twzlp"] Feb 14 18:44:59 crc kubenswrapper[4897]: E0214 18:44:59.041813 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85830a53-70c2-433d-a359-025fababa083" containerName="collect-profiles" Feb 14 
18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.041825 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="85830a53-70c2-433d-a359-025fababa083" containerName="collect-profiles" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.042085 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="85830a53-70c2-433d-a359-025fababa083" containerName="collect-profiles" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.043414 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fq4zf\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.043636 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.057101 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.067675 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzlp"] Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.094833 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.095511 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-utilities\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.095542 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86b6\" (UniqueName: \"kubernetes.io/projected/6cdafc37-f772-4b48-b1cf-29759861b373-kube-api-access-r86b6\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.095560 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-catalog-content\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.149562 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.197174 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-utilities\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.197245 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r86b6\" (UniqueName: \"kubernetes.io/projected/6cdafc37-f772-4b48-b1cf-29759861b373-kube-api-access-r86b6\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.197265 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-catalog-content\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.197648 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-utilities\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.198022 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-catalog-content\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" 
Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.226157 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r86b6\" (UniqueName: \"kubernetes.io/projected/6cdafc37-f772-4b48-b1cf-29759861b373-kube-api-access-r86b6\") pod \"redhat-marketplace-twzlp\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.278187 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.286308 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f9lc5" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.297773 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.319364 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.319405 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.319441 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: 
connection refused" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.319443 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.337121 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.390614 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.417333 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6cf2f"] Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.426379 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.431395 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.446145 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cf2f"] Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.504954 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4gh\" (UniqueName: \"kubernetes.io/projected/c35d0c45-bd4b-4e9c-bd85-e121f336a572-kube-api-access-wt4gh\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.505012 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-catalog-content\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.505096 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-utilities\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.524623 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.584589 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.585142 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.586311 4897 patch_prober.go:28] interesting pod/console-f9d7485db-6jjtk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.586354 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6jjtk" podUID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.594247 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" event={"ID":"85830a53-70c2-433d-a359-025fababa083","Type":"ContainerDied","Data":"b5a38b3b3b6207c66a4b782cb967b33841d18d482013a274a80c38fa29dbeb05"} Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.594282 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a38b3b3b6207c66a4b782cb967b33841d18d482013a274a80c38fa29dbeb05" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.594380 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.605950 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt4gh\" (UniqueName: \"kubernetes.io/projected/c35d0c45-bd4b-4e9c-bd85-e121f336a572-kube-api-access-wt4gh\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.606001 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-catalog-content\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.606060 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-utilities\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.608279 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9","Type":"ContainerStarted","Data":"ccb1072eab92d222dc4595f92b4c0d22858329f0bc039673b533250dc80a5923"} Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.608326 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9","Type":"ContainerStarted","Data":"9a4d60f15ca9443eb9039e13e971e8fe6588c7322190485a3a291d13f175985e"} Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 
18:44:59.608460 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-catalog-content\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.608842 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-utilities\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.642903 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt4gh\" (UniqueName: \"kubernetes.io/projected/c35d0c45-bd4b-4e9c-bd85-e121f336a572-kube-api-access-wt4gh\") pod \"redhat-marketplace-6cf2f\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.725382 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.731292 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.733581 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.7335643219999999 podStartE2EDuration="1.733564322s" podCreationTimestamp="2026-02-14 18:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:44:59.633251597 +0000 UTC 
m=+152.609660080" watchObservedRunningTime="2026-02-14 18:44:59.733564322 +0000 UTC m=+152.709972805" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.735534 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 18:44:59 crc kubenswrapper[4897]: [-]has-synced failed: reason withheld Feb 14 18:44:59 crc kubenswrapper[4897]: [+]process-running ok Feb 14 18:44:59 crc kubenswrapper[4897]: healthz check failed Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.735709 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.789331 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.816939 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.909065 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.974831 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzlp"] Feb 14 18:44:59 crc kubenswrapper[4897]: I0214 18:44:59.996254 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fq4zf"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.006569 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bgv5g"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.028405 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgv5g"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.028503 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.031249 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.115495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7v46\" (UniqueName: \"kubernetes.io/projected/7a553b46-b32c-435f-8e30-338b174cd444-kube-api-access-d7v46\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.115540 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-catalog-content\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.115582 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-utilities\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.115835 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.130934 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.135980 4897 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.137356 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.145750 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.145980 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.147385 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.216468 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b06eb7-15a4-4237-9a72-9c3464f1cff1-config-volume\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.216511 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b06eb7-15a4-4237-9a72-9c3464f1cff1-secret-volume\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.216535 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7v46\" (UniqueName: 
\"kubernetes.io/projected/7a553b46-b32c-435f-8e30-338b174cd444-kube-api-access-d7v46\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.216562 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-catalog-content\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.217390 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-catalog-content\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.217398 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-utilities\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.217676 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/66b06eb7-15a4-4237-9a72-9c3464f1cff1-kube-api-access-vtdxv\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.217757 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-utilities\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.254758 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7v46\" (UniqueName: \"kubernetes.io/projected/7a553b46-b32c-435f-8e30-338b174cd444-kube-api-access-d7v46\") pod \"redhat-operators-bgv5g\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.307507 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cf2f"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.321183 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b06eb7-15a4-4237-9a72-9c3464f1cff1-config-volume\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.321236 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b06eb7-15a4-4237-9a72-9c3464f1cff1-secret-volume\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.321312 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/66b06eb7-15a4-4237-9a72-9c3464f1cff1-kube-api-access-vtdxv\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.324462 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b06eb7-15a4-4237-9a72-9c3464f1cff1-config-volume\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.328886 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b06eb7-15a4-4237-9a72-9c3464f1cff1-secret-volume\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.337568 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/66b06eb7-15a4-4237-9a72-9c3464f1cff1-kube-api-access-vtdxv\") pod \"collect-profiles-29518245-94xzh\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.388334 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.414906 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wjckf"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.419815 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.433490 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjckf"] Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.481977 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.523678 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nlk\" (UniqueName: \"kubernetes.io/projected/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-kube-api-access-72nlk\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.523769 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-utilities\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.523872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-catalog-content\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.625864 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-catalog-content\") pod \"redhat-operators-wjckf\" (UID: 
\"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.625964 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72nlk\" (UniqueName: \"kubernetes.io/projected/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-kube-api-access-72nlk\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.626049 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-utilities\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.626968 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-catalog-content\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.627639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cf2f" event={"ID":"c35d0c45-bd4b-4e9c-bd85-e121f336a572","Type":"ContainerStarted","Data":"8f1932c1fc5f56edd5f4578ad960bda441efba90565d7ff24c78b3ed7570f5ef"} Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.629185 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-utilities\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: 
I0214 18:45:00.636359 4897 generic.go:334] "Generic (PLEG): container finished" podID="6cdafc37-f772-4b48-b1cf-29759861b373" containerID="979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482" exitCode=0 Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.636606 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzlp" event={"ID":"6cdafc37-f772-4b48-b1cf-29759861b373","Type":"ContainerDied","Data":"979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482"} Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.636678 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzlp" event={"ID":"6cdafc37-f772-4b48-b1cf-29759861b373","Type":"ContainerStarted","Data":"b1566aeb2f80d4a19e2c18b65e59709e8e04ca9d8eaeefc09275b7e00dbc712f"} Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.650962 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72nlk\" (UniqueName: \"kubernetes.io/projected/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-kube-api-access-72nlk\") pod \"redhat-operators-wjckf\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.657385 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" event={"ID":"c8aadef2-477c-4699-9a1b-dd557ad9e273","Type":"ContainerStarted","Data":"232f3c737ff8b8ee99153a62d77f08996a90061918e84f647703f787e430ee25"} Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.657439 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" event={"ID":"c8aadef2-477c-4699-9a1b-dd557ad9e273","Type":"ContainerStarted","Data":"9129ed5e2f54103df5f6c7696ef97cafcda0704b98e910f51685a1d49f7aa462"} Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.657608 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.669963 4897 generic.go:334] "Generic (PLEG): container finished" podID="00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9" containerID="ccb1072eab92d222dc4595f92b4c0d22858329f0bc039673b533250dc80a5923" exitCode=0 Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.670501 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9","Type":"ContainerDied","Data":"ccb1072eab92d222dc4595f92b4c0d22858329f0bc039673b533250dc80a5923"} Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.713226 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" podStartSLOduration=132.713207478 podStartE2EDuration="2m12.713207478s" podCreationTimestamp="2026-02-14 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:45:00.709156045 +0000 UTC m=+153.685564548" watchObservedRunningTime="2026-02-14 18:45:00.713207478 +0000 UTC m=+153.689615961" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.736392 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.739950 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-c5z8g" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.776973 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:45:00 crc kubenswrapper[4897]: I0214 18:45:00.835225 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgv5g"] Feb 14 18:45:00 crc kubenswrapper[4897]: W0214 18:45:00.849269 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a553b46_b32c_435f_8e30_338b174cd444.slice/crio-2709ebdcf9cefea9c61df9ec193c2b027b86835ab7464fe2a9ea8307cf26bdbd WatchSource:0}: Error finding container 2709ebdcf9cefea9c61df9ec193c2b027b86835ab7464fe2a9ea8307cf26bdbd: Status 404 returned error can't find the container with id 2709ebdcf9cefea9c61df9ec193c2b027b86835ab7464fe2a9ea8307cf26bdbd Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.015665 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh"] Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.088018 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjckf"] Feb 14 18:45:01 crc kubenswrapper[4897]: W0214 18:45:01.110763 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7287ac6e_a9b8_45f8_8b29_f2e46fe20d1e.slice/crio-7f69c9ee4821b91b02e0a459c416e8f899c0f5bc97670f1f00d7e02682a3fc23 WatchSource:0}: Error finding container 7f69c9ee4821b91b02e0a459c416e8f899c0f5bc97670f1f00d7e02682a3fc23: Status 404 returned error can't find the container with id 7f69c9ee4821b91b02e0a459c416e8f899c0f5bc97670f1f00d7e02682a3fc23 Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.690338 4897 generic.go:334] "Generic (PLEG): container finished" podID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerID="dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359" exitCode=0 Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 
18:45:01.691144 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjckf" event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerDied","Data":"dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.691217 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjckf" event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerStarted","Data":"7f69c9ee4821b91b02e0a459c416e8f899c0f5bc97670f1f00d7e02682a3fc23"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.697924 4897 generic.go:334] "Generic (PLEG): container finished" podID="66b06eb7-15a4-4237-9a72-9c3464f1cff1" containerID="a7350ca45e490c895e24f3af30e20355949f58244b141c2f6ce196da928d8e82" exitCode=0 Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.697982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" event={"ID":"66b06eb7-15a4-4237-9a72-9c3464f1cff1","Type":"ContainerDied","Data":"a7350ca45e490c895e24f3af30e20355949f58244b141c2f6ce196da928d8e82"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.698007 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" event={"ID":"66b06eb7-15a4-4237-9a72-9c3464f1cff1","Type":"ContainerStarted","Data":"a015eebbdac1cdb88b81ab63f13906b06a42183a20d9a90950f06015b5981de9"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.700093 4897 generic.go:334] "Generic (PLEG): container finished" podID="7a553b46-b32c-435f-8e30-338b174cd444" containerID="ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b" exitCode=0 Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.700160 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" 
event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerDied","Data":"ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.700175 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerStarted","Data":"2709ebdcf9cefea9c61df9ec193c2b027b86835ab7464fe2a9ea8307cf26bdbd"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.713674 4897 generic.go:334] "Generic (PLEG): container finished" podID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerID="2bc4f5fac5225af938a6d9bd383011802581fa899aed5dcae5fb26893275e7f8" exitCode=0 Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.713742 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cf2f" event={"ID":"c35d0c45-bd4b-4e9c-bd85-e121f336a572","Type":"ContainerDied","Data":"2bc4f5fac5225af938a6d9bd383011802581fa899aed5dcae5fb26893275e7f8"} Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.725991 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:45:01 crc kubenswrapper[4897]: I0214 18:45:01.726120 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.096772 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.268381 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kube-api-access\") pod \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.268464 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kubelet-dir\") pod \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\" (UID: \"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9\") " Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.268704 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9" (UID: "00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.280524 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9" (UID: "00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.371084 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.371129 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.624691 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 18:45:02 crc kubenswrapper[4897]: E0214 18:45:02.624947 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9" containerName="pruner" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.624959 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9" containerName="pruner" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.625122 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9" containerName="pruner" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.625654 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.637514 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.640787 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.646632 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.732420 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.732493 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00f3f4a6-8fc6-493b-95eb-5aca7a51b9f9","Type":"ContainerDied","Data":"9a4d60f15ca9443eb9039e13e971e8fe6588c7322190485a3a291d13f175985e"} Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.732526 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4d60f15ca9443eb9039e13e971e8fe6588c7322190485a3a291d13f175985e" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.777166 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.777266 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.878285 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.878351 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.878426 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.902468 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:02 crc kubenswrapper[4897]: I0214 18:45:02.979233 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.018760 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.189490 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/66b06eb7-15a4-4237-9a72-9c3464f1cff1-kube-api-access-vtdxv\") pod \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.189638 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b06eb7-15a4-4237-9a72-9c3464f1cff1-config-volume\") pod \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.189714 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b06eb7-15a4-4237-9a72-9c3464f1cff1-secret-volume\") pod \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\" (UID: \"66b06eb7-15a4-4237-9a72-9c3464f1cff1\") " Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.190435 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b06eb7-15a4-4237-9a72-9c3464f1cff1-config-volume" (OuterVolumeSpecName: "config-volume") pod "66b06eb7-15a4-4237-9a72-9c3464f1cff1" (UID: "66b06eb7-15a4-4237-9a72-9c3464f1cff1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.193522 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b06eb7-15a4-4237-9a72-9c3464f1cff1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66b06eb7-15a4-4237-9a72-9c3464f1cff1" (UID: "66b06eb7-15a4-4237-9a72-9c3464f1cff1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.194542 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b06eb7-15a4-4237-9a72-9c3464f1cff1-kube-api-access-vtdxv" (OuterVolumeSpecName: "kube-api-access-vtdxv") pod "66b06eb7-15a4-4237-9a72-9c3464f1cff1" (UID: "66b06eb7-15a4-4237-9a72-9c3464f1cff1"). InnerVolumeSpecName "kube-api-access-vtdxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.290890 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b06eb7-15a4-4237-9a72-9c3464f1cff1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.290921 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b06eb7-15a4-4237-9a72-9c3464f1cff1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.290932 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/66b06eb7-15a4-4237-9a72-9c3464f1cff1-kube-api-access-vtdxv\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.534599 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.765179 
4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.765785 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh" event={"ID":"66b06eb7-15a4-4237-9a72-9c3464f1cff1","Type":"ContainerDied","Data":"a015eebbdac1cdb88b81ab63f13906b06a42183a20d9a90950f06015b5981de9"} Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.765825 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a015eebbdac1cdb88b81ab63f13906b06a42183a20d9a90950f06015b5981de9" Feb 14 18:45:03 crc kubenswrapper[4897]: I0214 18:45:03.790684 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4","Type":"ContainerStarted","Data":"d0c5e098e6b60d0f98040cb3f096c51fe7545997f5a033d88d7ef61599485d6d"} Feb 14 18:45:04 crc kubenswrapper[4897]: I0214 18:45:04.820888 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4","Type":"ContainerStarted","Data":"f778c251cd4bf9a0e8fd0b04886eab857fcdc2555d87a854bc6e976d04ab83c0"} Feb 14 18:45:04 crc kubenswrapper[4897]: I0214 18:45:04.939768 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bzvvc" Feb 14 18:45:05 crc kubenswrapper[4897]: I0214 18:45:05.834790 4897 generic.go:334] "Generic (PLEG): container finished" podID="3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4" containerID="f778c251cd4bf9a0e8fd0b04886eab857fcdc2555d87a854bc6e976d04ab83c0" exitCode=0 Feb 14 18:45:05 crc kubenswrapper[4897]: I0214 18:45:05.834848 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4","Type":"ContainerDied","Data":"f778c251cd4bf9a0e8fd0b04886eab857fcdc2555d87a854bc6e976d04ab83c0"} Feb 14 18:45:09 crc kubenswrapper[4897]: I0214 18:45:09.323288 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:45:09 crc kubenswrapper[4897]: I0214 18:45:09.324136 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 14 18:45:09 crc kubenswrapper[4897]: I0214 18:45:09.323288 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 14 18:45:09 crc kubenswrapper[4897]: I0214 18:45:09.324240 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 14 18:45:09 crc kubenswrapper[4897]: I0214 18:45:09.594400 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:45:09 crc kubenswrapper[4897]: I0214 18:45:09.598581 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:45:10 crc kubenswrapper[4897]: I0214 18:45:10.198640 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:45:10 crc kubenswrapper[4897]: I0214 18:45:10.203764 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b614985-b2f8-443d-9996-635d7e407b24-metrics-certs\") pod \"network-metrics-daemon-xrgww\" (UID: \"6b614985-b2f8-443d-9996-635d7e407b24\") " pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:45:10 crc kubenswrapper[4897]: I0214 18:45:10.209900 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrgww" Feb 14 18:45:11 crc kubenswrapper[4897]: E0214 18:45:11.934572 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 18:45:11 crc kubenswrapper[4897]: E0214 18:45:11.934754 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wt4gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6cf2f_openshift-marketplace(c35d0c45-bd4b-4e9c-bd85-e121f336a572): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 14 18:45:11 crc kubenswrapper[4897]: E0214 18:45:11.937025 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in 
registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-marketplace-6cf2f" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" Feb 14 18:45:16 crc kubenswrapper[4897]: I0214 18:45:16.181126 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g8d99"] Feb 14 18:45:16 crc kubenswrapper[4897]: I0214 18:45:16.181674 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" containerID="cri-o://f588e1e1c8043949c4ea0ca1d83d86c01fd9f314c3f5609dd1b29643e9e07100" gracePeriod=30 Feb 14 18:45:16 crc kubenswrapper[4897]: I0214 18:45:16.199767 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"] Feb 14 18:45:16 crc kubenswrapper[4897]: I0214 18:45:16.200005 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" containerID="cri-o://eab8376e2eaf1c707ce818d228e29b8faa792a64f6b0039826a2d196d649afa8" gracePeriod=30 Feb 14 18:45:17 crc kubenswrapper[4897]: E0214 18:45:17.563411 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6cf2f" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" Feb 14 18:45:17 crc kubenswrapper[4897]: I0214 18:45:17.834216 4897 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g8d99 container/controller-manager namespace/openshift-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 18:45:17 crc kubenswrapper[4897]: I0214 18:45:17.834683 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 18:45:17 crc kubenswrapper[4897]: I0214 18:45:17.914966 4897 generic.go:334] "Generic (PLEG): container finished" podID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerID="f588e1e1c8043949c4ea0ca1d83d86c01fd9f314c3f5609dd1b29643e9e07100" exitCode=0 Feb 14 18:45:17 crc kubenswrapper[4897]: I0214 18:45:17.915052 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" event={"ID":"eca953dd-cbbc-404a-974f-babb9bf2d0e8","Type":"ContainerDied","Data":"f588e1e1c8043949c4ea0ca1d83d86c01fd9f314c3f5609dd1b29643e9e07100"} Feb 14 18:45:17 crc kubenswrapper[4897]: I0214 18:45:17.920803 4897 generic.go:334] "Generic (PLEG): container finished" podID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerID="eab8376e2eaf1c707ce818d228e29b8faa792a64f6b0039826a2d196d649afa8" exitCode=0 Feb 14 18:45:17 crc kubenswrapper[4897]: I0214 18:45:17.920877 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" event={"ID":"b63cb010-df8f-4e29-a7f3-6b68cb03e63a","Type":"ContainerDied","Data":"eab8376e2eaf1c707ce818d228e29b8faa792a64f6b0039826a2d196d649afa8"} Feb 14 18:45:19 crc kubenswrapper[4897]: I0214 18:45:19.341672 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9kvql" Feb 14 18:45:19 crc kubenswrapper[4897]: I0214 18:45:19.347266 
4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:45:19 crc kubenswrapper[4897]: I0214 18:45:19.382614 4897 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ws2d2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 14 18:45:19 crc kubenswrapper[4897]: I0214 18:45:19.382678 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 14 18:45:27 crc kubenswrapper[4897]: I0214 18:45:27.833880 4897 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g8d99 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 18:45:27 crc kubenswrapper[4897]: I0214 18:45:27.834251 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 18:45:29 crc kubenswrapper[4897]: I0214 18:45:29.382507 4897 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ws2d2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": 
dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 14 18:45:29 crc kubenswrapper[4897]: I0214 18:45:29.382844 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 14 18:45:30 crc kubenswrapper[4897]: I0214 18:45:30.181513 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 18:45:31 crc kubenswrapper[4897]: I0214 18:45:31.725907 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:45:31 crc kubenswrapper[4897]: I0214 18:45:31.726338 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.033548 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4","Type":"ContainerDied","Data":"d0c5e098e6b60d0f98040cb3f096c51fe7545997f5a033d88d7ef61599485d6d"} Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.033606 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c5e098e6b60d0f98040cb3f096c51fe7545997f5a033d88d7ef61599485d6d" Feb 
14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.069844 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.246680 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kubelet-dir\") pod \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.246792 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4" (UID: "3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.246941 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kube-api-access\") pod \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\" (UID: \"3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4\") " Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.247326 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.254826 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4" (UID: "3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:45:32 crc kubenswrapper[4897]: I0214 18:45:32.349319 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:33 crc kubenswrapper[4897]: I0214 18:45:33.041152 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 18:45:35 crc kubenswrapper[4897]: I0214 18:45:35.942360 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 18:45:38 crc kubenswrapper[4897]: I0214 18:45:38.834056 4897 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g8d99 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" start-of-body= Feb 14 18:45:38 crc kubenswrapper[4897]: I0214 18:45:38.834528 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" Feb 14 18:45:39 crc kubenswrapper[4897]: E0214 18:45:39.143446 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 18:45:39 crc kubenswrapper[4897]: E0214 18:45:39.143630 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7v46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bgv5g_openshift-marketplace(7a553b46-b32c-435f-8e30-338b174cd444): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 18:45:39 crc kubenswrapper[4897]: E0214 18:45:39.144788 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bgv5g" podUID="7a553b46-b32c-435f-8e30-338b174cd444" Feb 14 18:45:40 crc kubenswrapper[4897]: I0214 18:45:40.381416 4897 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ws2d2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: i/o timeout" start-of-body= Feb 14 18:45:40 crc kubenswrapper[4897]: I0214 18:45:40.381803 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: i/o timeout" Feb 14 18:45:40 crc kubenswrapper[4897]: E0214 18:45:40.877877 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bgv5g" podUID="7a553b46-b32c-435f-8e30-338b174cd444" Feb 14 18:45:40 crc kubenswrapper[4897]: E0214 18:45:40.955590 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 18:45:40 crc kubenswrapper[4897]: E0214 18:45:40.955953 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xs62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-84zhb_openshift-marketplace(696766b1-de35-447a-8f84-537044aa0f34): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 18:45:40 crc kubenswrapper[4897]: E0214 18:45:40.957284 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-84zhb" podUID="696766b1-de35-447a-8f84-537044aa0f34" Feb 14 18:45:41 crc 
kubenswrapper[4897]: I0214 18:45:41.828210 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 18:45:41 crc kubenswrapper[4897]: E0214 18:45:41.828656 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4" containerName="pruner" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.828671 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4" containerName="pruner" Feb 14 18:45:41 crc kubenswrapper[4897]: E0214 18:45:41.828688 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b06eb7-15a4-4237-9a72-9c3464f1cff1" containerName="collect-profiles" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.828694 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b06eb7-15a4-4237-9a72-9c3464f1cff1" containerName="collect-profiles" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.828798 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b06eb7-15a4-4237-9a72-9c3464f1cff1" containerName="collect-profiles" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.828812 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0307f4-ccf6-48ec-ad8a-d56ed2a684f4" containerName="pruner" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.829226 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.831007 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.831190 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.835132 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.975327 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:41 crc kubenswrapper[4897]: I0214 18:45:41.975412 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.076425 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.076497 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.076575 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.095541 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.153748 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.329885 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-84zhb" podUID="696766b1-de35-447a-8f84-537044aa0f34" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.395190 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.395348 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvq8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rph5f_openshift-marketplace(d360c9a9-d428-4ca4-9379-e052a6e60b22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.397174 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rph5f" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" Feb 14 18:45:42 crc 
kubenswrapper[4897]: I0214 18:45:42.399428 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.401077 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.427210 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"] Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.427461 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.427484 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.427528 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.427537 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.427649 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" containerName="controller-manager" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.427671 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" containerName="route-controller-manager" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.428874 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.439075 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.439223 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Star
tupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h6rm4_openshift-marketplace(5a07b450-333c-4f3f-8c4d-4b9bd35b7d74): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.441105 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h6rm4" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.450985 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.451315 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"] Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.451499 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72nlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wjckf_openshift-marketplace(7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.453491 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wjckf" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" Feb 14 18:45:42 crc 
kubenswrapper[4897]: I0214 18:45:42.481166 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb57x\" (UniqueName: \"kubernetes.io/projected/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-kube-api-access-sb57x\") pod \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481240 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44zzq\" (UniqueName: \"kubernetes.io/projected/eca953dd-cbbc-404a-974f-babb9bf2d0e8-kube-api-access-44zzq\") pod \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481266 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-serving-cert\") pod \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481297 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-client-ca\") pod \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481312 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-client-ca\") pod \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481339 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-config\") pod 
\"eca953dd-cbbc-404a-974f-babb9bf2d0e8\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481369 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-proxy-ca-bundles\") pod \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481415 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eca953dd-cbbc-404a-974f-babb9bf2d0e8-serving-cert\") pod \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\" (UID: \"eca953dd-cbbc-404a-974f-babb9bf2d0e8\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.481458 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-config\") pod \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\" (UID: \"b63cb010-df8f-4e29-a7f3-6b68cb03e63a\") " Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.482828 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b63cb010-df8f-4e29-a7f3-6b68cb03e63a" (UID: "b63cb010-df8f-4e29-a7f3-6b68cb03e63a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.482833 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "eca953dd-cbbc-404a-974f-babb9bf2d0e8" (UID: "eca953dd-cbbc-404a-974f-babb9bf2d0e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.483166 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eca953dd-cbbc-404a-974f-babb9bf2d0e8" (UID: "eca953dd-cbbc-404a-974f-babb9bf2d0e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.483283 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-config" (OuterVolumeSpecName: "config") pod "b63cb010-df8f-4e29-a7f3-6b68cb03e63a" (UID: "b63cb010-df8f-4e29-a7f3-6b68cb03e63a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.483995 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-config" (OuterVolumeSpecName: "config") pod "eca953dd-cbbc-404a-974f-babb9bf2d0e8" (UID: "eca953dd-cbbc-404a-974f-babb9bf2d0e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.485208 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.485341 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wp9mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod certified-operators-4c74d_openshift-marketplace(037c41d9-7976-43c9-baa6-57aec44c28de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.486766 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b63cb010-df8f-4e29-a7f3-6b68cb03e63a" (UID: "b63cb010-df8f-4e29-a7f3-6b68cb03e63a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: E0214 18:45:42.486829 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4c74d" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.486888 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca953dd-cbbc-404a-974f-babb9bf2d0e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eca953dd-cbbc-404a-974f-babb9bf2d0e8" (UID: "eca953dd-cbbc-404a-974f-babb9bf2d0e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.487214 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca953dd-cbbc-404a-974f-babb9bf2d0e8-kube-api-access-44zzq" (OuterVolumeSpecName: "kube-api-access-44zzq") pod "eca953dd-cbbc-404a-974f-babb9bf2d0e8" (UID: "eca953dd-cbbc-404a-974f-babb9bf2d0e8"). InnerVolumeSpecName "kube-api-access-44zzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.488251 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-kube-api-access-sb57x" (OuterVolumeSpecName: "kube-api-access-sb57x") pod "b63cb010-df8f-4e29-a7f3-6b68cb03e63a" (UID: "b63cb010-df8f-4e29-a7f3-6b68cb03e63a"). InnerVolumeSpecName "kube-api-access-sb57x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583023 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55640e27-4dfa-4535-b02e-47e7caef07a8-serving-cert\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583124 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-config\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-client-ca\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583168 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgdd\" (UniqueName: \"kubernetes.io/projected/55640e27-4dfa-4535-b02e-47e7caef07a8-kube-api-access-vsgdd\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583434 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb57x\" (UniqueName: \"kubernetes.io/projected/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-kube-api-access-sb57x\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583463 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44zzq\" (UniqueName: \"kubernetes.io/projected/eca953dd-cbbc-404a-974f-babb9bf2d0e8-kube-api-access-44zzq\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583474 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583483 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583492 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583501 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc 
kubenswrapper[4897]: I0214 18:45:42.583512 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eca953dd-cbbc-404a-974f-babb9bf2d0e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583539 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eca953dd-cbbc-404a-974f-babb9bf2d0e8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.583549 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b63cb010-df8f-4e29-a7f3-6b68cb03e63a-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.684168 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-client-ca\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.684203 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgdd\" (UniqueName: \"kubernetes.io/projected/55640e27-4dfa-4535-b02e-47e7caef07a8-kube-api-access-vsgdd\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.684271 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55640e27-4dfa-4535-b02e-47e7caef07a8-serving-cert\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " 
pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.684313 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-config\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.685160 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-client-ca\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.685432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-config\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.693530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55640e27-4dfa-4535-b02e-47e7caef07a8-serving-cert\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.698653 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsgdd\" (UniqueName: 
\"kubernetes.io/projected/55640e27-4dfa-4535-b02e-47e7caef07a8-kube-api-access-vsgdd\") pod \"route-controller-manager-868dc56cb4-f2xqm\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") " pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:42 crc kubenswrapper[4897]: I0214 18:45:42.752781 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.086103 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" event={"ID":"b63cb010-df8f-4e29-a7f3-6b68cb03e63a","Type":"ContainerDied","Data":"17557ee6fab1f9dab1b078daf1fe67862c442b04be2bb62cdaf0f396cff542e3"} Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.086122 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2" Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.086153 4897 scope.go:117] "RemoveContainer" containerID="eab8376e2eaf1c707ce818d228e29b8faa792a64f6b0039826a2d196d649afa8" Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.092874 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.092951 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g8d99" event={"ID":"eca953dd-cbbc-404a-974f-babb9bf2d0e8","Type":"ContainerDied","Data":"6a066f08e081bc34fd0102b3a657573df6d5bc326f0ba5a812d2f5b204a6ac71"} Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.175740 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"] Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.179864 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ws2d2"] Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.188709 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g8d99"] Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.192759 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g8d99"] Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.802746 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63cb010-df8f-4e29-a7f3-6b68cb03e63a" path="/var/lib/kubelet/pods/b63cb010-df8f-4e29-a7f3-6b68cb03e63a/volumes" Feb 14 18:45:43 crc kubenswrapper[4897]: I0214 18:45:43.803257 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca953dd-cbbc-404a-974f-babb9bf2d0e8" path="/var/lib/kubelet/pods/eca953dd-cbbc-404a-974f-babb9bf2d0e8/volumes" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.794760 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"] Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.796059 4897 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.798611 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.798894 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.799010 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.799084 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.799198 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.801612 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.811937 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"] Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.812330 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.842768 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf0dad3-7118-4506-9603-dfc6b778980c-serving-cert\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " 
pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.843062 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-proxy-ca-bundles\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.843277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-config\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.843452 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8v2\" (UniqueName: \"kubernetes.io/projected/adf0dad3-7118-4506-9603-dfc6b778980c-kube-api-access-hz8v2\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.843582 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-client-ca\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.944332 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-config\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.944622 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8v2\" (UniqueName: \"kubernetes.io/projected/adf0dad3-7118-4506-9603-dfc6b778980c-kube-api-access-hz8v2\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.944746 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-client-ca\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.944861 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf0dad3-7118-4506-9603-dfc6b778980c-serving-cert\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.944981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-proxy-ca-bundles\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.946363 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-proxy-ca-bundles\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.947071 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-client-ca\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.948083 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-config\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.952496 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf0dad3-7118-4506-9603-dfc6b778980c-serving-cert\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:44 crc kubenswrapper[4897]: I0214 18:45:44.960705 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8v2\" (UniqueName: \"kubernetes.io/projected/adf0dad3-7118-4506-9603-dfc6b778980c-kube-api-access-hz8v2\") pod \"controller-manager-568ccf4c5-rqm9f\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:45 crc 
kubenswrapper[4897]: I0214 18:45:45.126600 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.151466 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h6rm4" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.151498 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wjckf" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.151567 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rph5f" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.151727 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4c74d" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" Feb 14 18:45:45 crc kubenswrapper[4897]: I0214 18:45:45.169720 4897 scope.go:117] "RemoveContainer" containerID="f588e1e1c8043949c4ea0ca1d83d86c01fd9f314c3f5609dd1b29643e9e07100" Feb 14 18:45:45 crc kubenswrapper[4897]: I0214 18:45:45.399636 
4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 14 18:45:45 crc kubenswrapper[4897]: I0214 18:45:45.492935 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"]
Feb 14 18:45:45 crc kubenswrapper[4897]: W0214 18:45:45.499840 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf0dad3_7118_4506_9603_dfc6b778980c.slice/crio-91d0770bfb47a587a12787d08ed337c3516a84e4f45de914c66a80350b92eea4 WatchSource:0}: Error finding container 91d0770bfb47a587a12787d08ed337c3516a84e4f45de914c66a80350b92eea4: Status 404 returned error can't find the container with id 91d0770bfb47a587a12787d08ed337c3516a84e4f45de914c66a80350b92eea4
Feb 14 18:45:45 crc kubenswrapper[4897]: I0214 18:45:45.549539 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrgww"]
Feb 14 18:45:45 crc kubenswrapper[4897]: I0214 18:45:45.648774 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"]
Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.707699 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.707832 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r86b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-twzlp_openshift-marketplace(6cdafc37-f772-4b48-b1cf-29759861b373): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 14 18:45:45 crc kubenswrapper[4897]: E0214 18:45:45.709099 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-twzlp" podUID="6cdafc37-f772-4b48-b1cf-29759861b373"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.108249 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" event={"ID":"adf0dad3-7118-4506-9603-dfc6b778980c","Type":"ContainerStarted","Data":"dbd7ab816795a99ec6d4210a0b2b832637559fa478b941695f2770ac2d2db425"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.108291 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" event={"ID":"adf0dad3-7118-4506-9603-dfc6b778980c","Type":"ContainerStarted","Data":"91d0770bfb47a587a12787d08ed337c3516a84e4f45de914c66a80350b92eea4"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.108604 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.109573 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9","Type":"ContainerStarted","Data":"287674dc67083854eab37c36309a15c36bbb2ed7556e878a64382b6cddb17536"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.109612 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9","Type":"ContainerStarted","Data":"bcfc826309ab6c68c1cf2231230f21c5cf4e65efb266d43bda4f82e16835d234"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.112471 4897 generic.go:334] "Generic (PLEG): container finished" podID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerID="ec40a6bb051d7dc153370eec2d53aeff6abd46ebcd0dc7e229cc7891e742b111" exitCode=0
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.112496 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cf2f" event={"ID":"c35d0c45-bd4b-4e9c-bd85-e121f336a572","Type":"ContainerDied","Data":"ec40a6bb051d7dc153370eec2d53aeff6abd46ebcd0dc7e229cc7891e742b111"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.112736 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.118360 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" event={"ID":"55640e27-4dfa-4535-b02e-47e7caef07a8","Type":"ContainerStarted","Data":"d49f7cb108db453cd8ce6e236bd5750e1e0ad730377f585306193c4d97d8e8be"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.118405 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" event={"ID":"55640e27-4dfa-4535-b02e-47e7caef07a8","Type":"ContainerStarted","Data":"4964e23413babff098d67bd7450e8a8f7c9d77ae68a1c80648852fda26703ee6"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.118602 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.124258 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" podStartSLOduration=10.124243157 podStartE2EDuration="10.124243157s" podCreationTimestamp="2026-02-14 18:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:45:46.122645524 +0000 UTC m=+199.099054027" watchObservedRunningTime="2026-02-14 18:45:46.124243157 +0000 UTC m=+199.100651640"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.129435 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrgww" event={"ID":"6b614985-b2f8-443d-9996-635d7e407b24","Type":"ContainerStarted","Data":"bcb3d12d6a07f01757115da906049f5ae4ce0274de62e358d62a2bbe0f78f068"}
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.129484 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrgww" event={"ID":"6b614985-b2f8-443d-9996-635d7e407b24","Type":"ContainerStarted","Data":"9995b2a9bb6024f26ba2d5a0ec86be08e38981c838714fa7db2a63c29feab1cd"}
Feb 14 18:45:46 crc kubenswrapper[4897]: E0214 18:45:46.133288 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-twzlp" podUID="6cdafc37-f772-4b48-b1cf-29759861b373"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.157461 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=5.15744508 podStartE2EDuration="5.15744508s" podCreationTimestamp="2026-02-14 18:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:45:46.156002003 +0000 UTC m=+199.132410496" watchObservedRunningTime="2026-02-14 18:45:46.15744508 +0000 UTC m=+199.133853563"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.212703 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" podStartSLOduration=10.212688319 podStartE2EDuration="10.212688319s" podCreationTimestamp="2026-02-14 18:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:45:46.211408298 +0000 UTC m=+199.187816791" watchObservedRunningTime="2026-02-14 18:45:46.212688319 +0000 UTC m=+199.189096802"
Feb 14 18:45:46 crc kubenswrapper[4897]: I0214 18:45:46.477591 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"
Feb 14 18:45:47 crc kubenswrapper[4897]: I0214 18:45:47.134877 4897 generic.go:334] "Generic (PLEG): container finished" podID="b7ef9cc6-6914-41d3-9614-5e9e6f0652f9" containerID="287674dc67083854eab37c36309a15c36bbb2ed7556e878a64382b6cddb17536" exitCode=0
Feb 14 18:45:47 crc kubenswrapper[4897]: I0214 18:45:47.134942 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9","Type":"ContainerDied","Data":"287674dc67083854eab37c36309a15c36bbb2ed7556e878a64382b6cddb17536"}
Feb 14 18:45:47 crc kubenswrapper[4897]: I0214 18:45:47.137969 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cf2f" event={"ID":"c35d0c45-bd4b-4e9c-bd85-e121f336a572","Type":"ContainerStarted","Data":"e34fbe5ee69378b4f5d2ffeaf74cca67258c0592c9a85014e9b47fb05a610874"}
Feb 14 18:45:47 crc kubenswrapper[4897]: I0214 18:45:47.140933 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrgww" event={"ID":"6b614985-b2f8-443d-9996-635d7e407b24","Type":"ContainerStarted","Data":"638114b4932efa30c853face72c5b3dc0dd117f5e4320f831a4fe2ff63b69abf"}
Feb 14 18:45:47 crc kubenswrapper[4897]: I0214 18:45:47.175898 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6cf2f" podStartSLOduration=3.113324039 podStartE2EDuration="48.175873313s" podCreationTimestamp="2026-02-14 18:44:59 +0000 UTC" firstStartedPulling="2026-02-14 18:45:01.714822125 +0000 UTC m=+154.691230608" lastFinishedPulling="2026-02-14 18:45:46.777371399 +0000 UTC m=+199.753779882" observedRunningTime="2026-02-14 18:45:47.173714282 +0000 UTC m=+200.150122765" watchObservedRunningTime="2026-02-14 18:45:47.175873313 +0000 UTC m=+200.152281816"
Feb 14 18:45:47 crc kubenswrapper[4897]: I0214 18:45:47.195572 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xrgww" podStartSLOduration=180.195526141 podStartE2EDuration="3m0.195526141s" podCreationTimestamp="2026-02-14 18:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:45:47.191091255 +0000 UTC m=+200.167499758" watchObservedRunningTime="2026-02-14 18:45:47.195526141 +0000 UTC m=+200.171934644"
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.448983 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.613236 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kubelet-dir\") pod \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") "
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.613383 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b7ef9cc6-6914-41d3-9614-5e9e6f0652f9" (UID: "b7ef9cc6-6914-41d3-9614-5e9e6f0652f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.613717 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kube-api-access\") pod \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\" (UID: \"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9\") "
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.613946 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.619495 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b7ef9cc6-6914-41d3-9614-5e9e6f0652f9" (UID: "b7ef9cc6-6914-41d3-9614-5e9e6f0652f9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:45:48 crc kubenswrapper[4897]: I0214 18:45:48.715613 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7ef9cc6-6914-41d3-9614-5e9e6f0652f9-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.015654 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 14 18:45:49 crc kubenswrapper[4897]: E0214 18:45:49.016586 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ef9cc6-6914-41d3-9614-5e9e6f0652f9" containerName="pruner"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.016612 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ef9cc6-6914-41d3-9614-5e9e6f0652f9" containerName="pruner"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.016743 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ef9cc6-6914-41d3-9614-5e9e6f0652f9" containerName="pruner"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.017265 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.032602 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.119794 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.119890 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-var-lock\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.119916 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.151999 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7ef9cc6-6914-41d3-9614-5e9e6f0652f9","Type":"ContainerDied","Data":"bcfc826309ab6c68c1cf2231230f21c5cf4e65efb266d43bda4f82e16835d234"}
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.152050 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcfc826309ab6c68c1cf2231230f21c5cf4e65efb266d43bda4f82e16835d234"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.152094 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.220645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-var-lock\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.220873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.220957 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.220760 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-var-lock\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.221210 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.237830 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.337706 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.734779 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 14 18:45:49 crc kubenswrapper[4897]: W0214 18:45:49.741910 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod69e1bf34_207a_47f6_a31f_035e5e25b2d7.slice/crio-524465194cd8be6127d86e7e7859370f02c9fea1ad667bf00da16c4ddd830d9c WatchSource:0}: Error finding container 524465194cd8be6127d86e7e7859370f02c9fea1ad667bf00da16c4ddd830d9c: Status 404 returned error can't find the container with id 524465194cd8be6127d86e7e7859370f02c9fea1ad667bf00da16c4ddd830d9c
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.789908 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6cf2f"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.789952 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6cf2f"
Feb 14 18:45:49 crc kubenswrapper[4897]: I0214 18:45:49.925427 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6cf2f"
Feb 14 18:45:50 crc kubenswrapper[4897]: I0214 18:45:50.159313 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69e1bf34-207a-47f6-a31f-035e5e25b2d7","Type":"ContainerStarted","Data":"e58e4c02dcf3e3aa6c8744d59a13b9604dcfa050274ca705b7257dbbc11bb678"}
Feb 14 18:45:50 crc kubenswrapper[4897]: I0214 18:45:50.159362 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69e1bf34-207a-47f6-a31f-035e5e25b2d7","Type":"ContainerStarted","Data":"524465194cd8be6127d86e7e7859370f02c9fea1ad667bf00da16c4ddd830d9c"}
Feb 14 18:45:50 crc kubenswrapper[4897]: I0214 18:45:50.178022 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.178000503 podStartE2EDuration="1.178000503s" podCreationTimestamp="2026-02-14 18:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:45:50.174616812 +0000 UTC m=+203.151025305" watchObservedRunningTime="2026-02-14 18:45:50.178000503 +0000 UTC m=+203.154408986"
Feb 14 18:45:53 crc kubenswrapper[4897]: I0214 18:45:53.176111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerStarted","Data":"be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6"}
Feb 14 18:45:54 crc kubenswrapper[4897]: I0214 18:45:54.182754 4897 generic.go:334] "Generic (PLEG): container finished" podID="7a553b46-b32c-435f-8e30-338b174cd444" containerID="be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6" exitCode=0
Feb 14 18:45:54 crc kubenswrapper[4897]: I0214 18:45:54.182801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerDied","Data":"be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6"}
Feb 14 18:45:56 crc kubenswrapper[4897]: I0214 18:45:56.170067 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"]
Feb 14 18:45:56 crc kubenswrapper[4897]: I0214 18:45:56.170591 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" podUID="adf0dad3-7118-4506-9603-dfc6b778980c" containerName="controller-manager" containerID="cri-o://dbd7ab816795a99ec6d4210a0b2b832637559fa478b941695f2770ac2d2db425" gracePeriod=30
Feb 14 18:45:56 crc kubenswrapper[4897]: I0214 18:45:56.215072 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"]
Feb 14 18:45:56 crc kubenswrapper[4897]: I0214 18:45:56.215304 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" podUID="55640e27-4dfa-4535-b02e-47e7caef07a8" containerName="route-controller-manager" containerID="cri-o://d49f7cb108db453cd8ce6e236bd5750e1e0ad730377f585306193c4d97d8e8be" gracePeriod=30
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.204775 4897 generic.go:334] "Generic (PLEG): container finished" podID="adf0dad3-7118-4506-9603-dfc6b778980c" containerID="dbd7ab816795a99ec6d4210a0b2b832637559fa478b941695f2770ac2d2db425" exitCode=0
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.205164 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" event={"ID":"adf0dad3-7118-4506-9603-dfc6b778980c","Type":"ContainerDied","Data":"dbd7ab816795a99ec6d4210a0b2b832637559fa478b941695f2770ac2d2db425"}
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.207626 4897 generic.go:334] "Generic (PLEG): container finished" podID="55640e27-4dfa-4535-b02e-47e7caef07a8" containerID="d49f7cb108db453cd8ce6e236bd5750e1e0ad730377f585306193c4d97d8e8be" exitCode=0
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.207652 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" event={"ID":"55640e27-4dfa-4535-b02e-47e7caef07a8","Type":"ContainerDied","Data":"d49f7cb108db453cd8ce6e236bd5750e1e0ad730377f585306193c4d97d8e8be"}
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.359052 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.393214 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"]
Feb 14 18:45:57 crc kubenswrapper[4897]: E0214 18:45:57.393462 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55640e27-4dfa-4535-b02e-47e7caef07a8" containerName="route-controller-manager"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.393473 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="55640e27-4dfa-4535-b02e-47e7caef07a8" containerName="route-controller-manager"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.393605 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="55640e27-4dfa-4535-b02e-47e7caef07a8" containerName="route-controller-manager"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.394129 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.410677 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"]
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.529067 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsgdd\" (UniqueName: \"kubernetes.io/projected/55640e27-4dfa-4535-b02e-47e7caef07a8-kube-api-access-vsgdd\") pod \"55640e27-4dfa-4535-b02e-47e7caef07a8\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") "
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.529175 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55640e27-4dfa-4535-b02e-47e7caef07a8-serving-cert\") pod \"55640e27-4dfa-4535-b02e-47e7caef07a8\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") "
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.529285 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-config\") pod \"55640e27-4dfa-4535-b02e-47e7caef07a8\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") "
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.529348 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-client-ca\") pod \"55640e27-4dfa-4535-b02e-47e7caef07a8\" (UID: \"55640e27-4dfa-4535-b02e-47e7caef07a8\") "
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.529606 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0130b3-7cc4-4a83-a493-520737eaa30c-serving-cert\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.529854 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-config\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.530117 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-client-ca\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.530213 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpktq\" (UniqueName: \"kubernetes.io/projected/ec0130b3-7cc4-4a83-a493-520737eaa30c-kube-api-access-cpktq\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.530290 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "55640e27-4dfa-4535-b02e-47e7caef07a8" (UID: "55640e27-4dfa-4535-b02e-47e7caef07a8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.530444 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-config" (OuterVolumeSpecName: "config") pod "55640e27-4dfa-4535-b02e-47e7caef07a8" (UID: "55640e27-4dfa-4535-b02e-47e7caef07a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.535794 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55640e27-4dfa-4535-b02e-47e7caef07a8-kube-api-access-vsgdd" (OuterVolumeSpecName: "kube-api-access-vsgdd") pod "55640e27-4dfa-4535-b02e-47e7caef07a8" (UID: "55640e27-4dfa-4535-b02e-47e7caef07a8"). InnerVolumeSpecName "kube-api-access-vsgdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.545295 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55640e27-4dfa-4535-b02e-47e7caef07a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55640e27-4dfa-4535-b02e-47e7caef07a8" (UID: "55640e27-4dfa-4535-b02e-47e7caef07a8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.631751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-config\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.631837 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-client-ca\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.631872 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpktq\" (UniqueName: \"kubernetes.io/projected/ec0130b3-7cc4-4a83-a493-520737eaa30c-kube-api-access-cpktq\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.631926 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0130b3-7cc4-4a83-a493-520737eaa30c-serving-cert\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.631976 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.631991 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsgdd\" (UniqueName: \"kubernetes.io/projected/55640e27-4dfa-4535-b02e-47e7caef07a8-kube-api-access-vsgdd\") on node \"crc\" DevicePath \"\""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.632005 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55640e27-4dfa-4535-b02e-47e7caef07a8-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.632019 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55640e27-4dfa-4535-b02e-47e7caef07a8-config\") on node \"crc\" DevicePath \"\""
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.633200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-client-ca\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.634405 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-config\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.637705 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0130b3-7cc4-4a83-a493-520737eaa30c-serving-cert\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.660728 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpktq\" (UniqueName: \"kubernetes.io/projected/ec0130b3-7cc4-4a83-a493-520737eaa30c-kube-api-access-cpktq\") pod \"route-controller-manager-6cb77dfd47-qtwnx\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.731483 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"
Feb 14 18:45:57 crc kubenswrapper[4897]: I0214 18:45:57.875629 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"
Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.040999 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-client-ca\") pod \"adf0dad3-7118-4506-9603-dfc6b778980c\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") "
Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.041084 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-config\") pod \"adf0dad3-7118-4506-9603-dfc6b778980c\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") "
Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.041127 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf0dad3-7118-4506-9603-dfc6b778980c-serving-cert\") pod
\"adf0dad3-7118-4506-9603-dfc6b778980c\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.041160 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-proxy-ca-bundles\") pod \"adf0dad3-7118-4506-9603-dfc6b778980c\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.041229 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz8v2\" (UniqueName: \"kubernetes.io/projected/adf0dad3-7118-4506-9603-dfc6b778980c-kube-api-access-hz8v2\") pod \"adf0dad3-7118-4506-9603-dfc6b778980c\" (UID: \"adf0dad3-7118-4506-9603-dfc6b778980c\") " Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.042410 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "adf0dad3-7118-4506-9603-dfc6b778980c" (UID: "adf0dad3-7118-4506-9603-dfc6b778980c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.042583 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-client-ca" (OuterVolumeSpecName: "client-ca") pod "adf0dad3-7118-4506-9603-dfc6b778980c" (UID: "adf0dad3-7118-4506-9603-dfc6b778980c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.042746 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-config" (OuterVolumeSpecName: "config") pod "adf0dad3-7118-4506-9603-dfc6b778980c" (UID: "adf0dad3-7118-4506-9603-dfc6b778980c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.045292 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf0dad3-7118-4506-9603-dfc6b778980c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "adf0dad3-7118-4506-9603-dfc6b778980c" (UID: "adf0dad3-7118-4506-9603-dfc6b778980c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.046169 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf0dad3-7118-4506-9603-dfc6b778980c-kube-api-access-hz8v2" (OuterVolumeSpecName: "kube-api-access-hz8v2") pod "adf0dad3-7118-4506-9603-dfc6b778980c" (UID: "adf0dad3-7118-4506-9603-dfc6b778980c"). InnerVolumeSpecName "kube-api-access-hz8v2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.142905 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz8v2\" (UniqueName: \"kubernetes.io/projected/adf0dad3-7118-4506-9603-dfc6b778980c-kube-api-access-hz8v2\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.142972 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.142993 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.143056 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf0dad3-7118-4506-9603-dfc6b778980c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.143077 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf0dad3-7118-4506-9603-dfc6b778980c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.215056 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" event={"ID":"adf0dad3-7118-4506-9603-dfc6b778980c","Type":"ContainerDied","Data":"91d0770bfb47a587a12787d08ed337c3516a84e4f45de914c66a80350b92eea4"} Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.215129 4897 scope.go:117] "RemoveContainer" containerID="dbd7ab816795a99ec6d4210a0b2b832637559fa478b941695f2770ac2d2db425" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.215265 4897 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568ccf4c5-rqm9f" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.221235 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" event={"ID":"55640e27-4dfa-4535-b02e-47e7caef07a8","Type":"ContainerDied","Data":"4964e23413babff098d67bd7450e8a8f7c9d77ae68a1c80648852fda26703ee6"} Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.221269 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.249871 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"] Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.258337 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868dc56cb4-f2xqm"] Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.272332 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"] Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.279074 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-568ccf4c5-rqm9f"] Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.340240 4897 scope.go:117] "RemoveContainer" containerID="d49f7cb108db453cd8ce6e236bd5750e1e0ad730377f585306193c4d97d8e8be" Feb 14 18:45:58 crc kubenswrapper[4897]: I0214 18:45:58.622430 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"] Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.234303 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" event={"ID":"ec0130b3-7cc4-4a83-a493-520737eaa30c","Type":"ContainerStarted","Data":"776b3b4df12614fbaa81a158e0d2e2c73e11aaf8ada54396fc21fab1622f0ec2"} Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.241018 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerStarted","Data":"d9154695812d8eb20f6ba1b55e5a813e35989852f7de76ff6c0671ee683fadc4"} Feb 14 18:45:59 crc kubenswrapper[4897]: E0214 18:45:59.748593 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696766b1_de35_447a_8f84_537044aa0f34.slice/crio-conmon-d9154695812d8eb20f6ba1b55e5a813e35989852f7de76ff6c0671ee683fadc4.scope\": RecentStats: unable to find data in memory cache]" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.812700 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55640e27-4dfa-4535-b02e-47e7caef07a8" path="/var/lib/kubelet/pods/55640e27-4dfa-4535-b02e-47e7caef07a8/volumes" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.813658 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf0dad3-7118-4506-9603-dfc6b778980c" path="/var/lib/kubelet/pods/adf0dad3-7118-4506-9603-dfc6b778980c/volumes" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.815751 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7796f4d89c-ljgmh"] Feb 14 18:45:59 crc kubenswrapper[4897]: E0214 18:45:59.815943 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf0dad3-7118-4506-9603-dfc6b778980c" containerName="controller-manager" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.815963 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="adf0dad3-7118-4506-9603-dfc6b778980c" containerName="controller-manager" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.816562 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf0dad3-7118-4506-9603-dfc6b778980c" containerName="controller-manager" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.817050 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.822132 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.822681 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.823171 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.823458 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.823655 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.825920 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.835593 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.838577 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7796f4d89c-ljgmh"] Feb 14 18:45:59 
crc kubenswrapper[4897]: I0214 18:45:59.862258 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.973659 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-client-ca\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.973732 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-proxy-ca-bundles\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.973820 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-config\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.973882 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd8e4fe-070e-4868-b044-e2cbe1205989-serving-cert\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:45:59 crc kubenswrapper[4897]: I0214 18:45:59.973974 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k59z\" (UniqueName: \"kubernetes.io/projected/8cd8e4fe-070e-4868-b044-e2cbe1205989-kube-api-access-8k59z\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.075645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-client-ca\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.075711 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-proxy-ca-bundles\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.075779 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-config\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.075840 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd8e4fe-070e-4868-b044-e2cbe1205989-serving-cert\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " 
pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.075921 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k59z\" (UniqueName: \"kubernetes.io/projected/8cd8e4fe-070e-4868-b044-e2cbe1205989-kube-api-access-8k59z\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.076902 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-client-ca\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.077555 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-proxy-ca-bundles\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.077826 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-config\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.095914 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd8e4fe-070e-4868-b044-e2cbe1205989-serving-cert\") pod 
\"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.106913 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k59z\" (UniqueName: \"kubernetes.io/projected/8cd8e4fe-070e-4868-b044-e2cbe1205989-kube-api-access-8k59z\") pod \"controller-manager-7796f4d89c-ljgmh\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.153775 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.264171 4897 generic.go:334] "Generic (PLEG): container finished" podID="696766b1-de35-447a-8f84-537044aa0f34" containerID="d9154695812d8eb20f6ba1b55e5a813e35989852f7de76ff6c0671ee683fadc4" exitCode=0 Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.264269 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerDied","Data":"d9154695812d8eb20f6ba1b55e5a813e35989852f7de76ff6c0671ee683fadc4"} Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.276948 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerStarted","Data":"8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9"} Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.284122 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" 
event={"ID":"ec0130b3-7cc4-4a83-a493-520737eaa30c","Type":"ContainerStarted","Data":"8684ea7ede908e964a116dd71ba8f9ed677e31f6a52e928b266fa120e42626ff"} Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.284969 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.294853 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.306570 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bgv5g" podStartSLOduration=4.666514868 podStartE2EDuration="1m1.30654913s" podCreationTimestamp="2026-02-14 18:44:59 +0000 UTC" firstStartedPulling="2026-02-14 18:45:01.701411675 +0000 UTC m=+154.677820158" lastFinishedPulling="2026-02-14 18:45:58.341445897 +0000 UTC m=+211.317854420" observedRunningTime="2026-02-14 18:46:00.302746285 +0000 UTC m=+213.279154778" watchObservedRunningTime="2026-02-14 18:46:00.30654913 +0000 UTC m=+213.282957623" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.354919 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" podStartSLOduration=4.354902583 podStartE2EDuration="4.354902583s" podCreationTimestamp="2026-02-14 18:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:46:00.334541932 +0000 UTC m=+213.310950445" watchObservedRunningTime="2026-02-14 18:46:00.354902583 +0000 UTC m=+213.331311056" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.391149 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.391210 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:46:00 crc kubenswrapper[4897]: I0214 18:46:00.400890 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7796f4d89c-ljgmh"] Feb 14 18:46:00 crc kubenswrapper[4897]: W0214 18:46:00.417385 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd8e4fe_070e_4868_b044_e2cbe1205989.slice/crio-cee60748aff435b6fec65bd49ffe5f90e8a2ea1e947d614ac87ce7cd4cd3524b WatchSource:0}: Error finding container cee60748aff435b6fec65bd49ffe5f90e8a2ea1e947d614ac87ce7cd4cd3524b: Status 404 returned error can't find the container with id cee60748aff435b6fec65bd49ffe5f90e8a2ea1e947d614ac87ce7cd4cd3524b Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.290311 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" event={"ID":"8cd8e4fe-070e-4868-b044-e2cbe1205989","Type":"ContainerStarted","Data":"1a866da21f68c64765fbc4a518ca5c6390d6fd0ec310c1146fc64389de84f913"} Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.290368 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" event={"ID":"8cd8e4fe-070e-4868-b044-e2cbe1205989","Type":"ContainerStarted","Data":"cee60748aff435b6fec65bd49ffe5f90e8a2ea1e947d614ac87ce7cd4cd3524b"} Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.291582 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.296267 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.311227 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" podStartSLOduration=5.31120338 podStartE2EDuration="5.31120338s" podCreationTimestamp="2026-02-14 18:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:46:01.305838283 +0000 UTC m=+214.282246776" watchObservedRunningTime="2026-02-14 18:46:01.31120338 +0000 UTC m=+214.287611883" Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.436408 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bgv5g" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="registry-server" probeResult="failure" output=< Feb 14 18:46:01 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 18:46:01 crc kubenswrapper[4897]: > Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.632653 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cf2f"] Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.632895 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6cf2f" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="registry-server" containerID="cri-o://e34fbe5ee69378b4f5d2ffeaf74cca67258c0592c9a85014e9b47fb05a610874" gracePeriod=2 Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.726142 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:46:01 crc 
kubenswrapper[4897]: I0214 18:46:01.726250 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.726325 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.727265 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 18:46:01 crc kubenswrapper[4897]: I0214 18:46:01.727448 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af" gracePeriod=600 Feb 14 18:46:02 crc kubenswrapper[4897]: I0214 18:46:02.299574 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af" exitCode=0 Feb 14 18:46:02 crc kubenswrapper[4897]: I0214 18:46:02.299622 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af"} 
Feb 14 18:46:03 crc kubenswrapper[4897]: I0214 18:46:03.311963 4897 generic.go:334] "Generic (PLEG): container finished" podID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerID="e34fbe5ee69378b4f5d2ffeaf74cca67258c0592c9a85014e9b47fb05a610874" exitCode=0 Feb 14 18:46:03 crc kubenswrapper[4897]: I0214 18:46:03.312091 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cf2f" event={"ID":"c35d0c45-bd4b-4e9c-bd85-e121f336a572","Type":"ContainerDied","Data":"e34fbe5ee69378b4f5d2ffeaf74cca67258c0592c9a85014e9b47fb05a610874"} Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.014009 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.133265 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-catalog-content\") pod \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.133433 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-utilities\") pod \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.133471 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt4gh\" (UniqueName: \"kubernetes.io/projected/c35d0c45-bd4b-4e9c-bd85-e121f336a572-kube-api-access-wt4gh\") pod \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\" (UID: \"c35d0c45-bd4b-4e9c-bd85-e121f336a572\") " Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.134139 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-utilities" (OuterVolumeSpecName: "utilities") pod "c35d0c45-bd4b-4e9c-bd85-e121f336a572" (UID: "c35d0c45-bd4b-4e9c-bd85-e121f336a572"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.155321 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35d0c45-bd4b-4e9c-bd85-e121f336a572-kube-api-access-wt4gh" (OuterVolumeSpecName: "kube-api-access-wt4gh") pod "c35d0c45-bd4b-4e9c-bd85-e121f336a572" (UID: "c35d0c45-bd4b-4e9c-bd85-e121f336a572"). InnerVolumeSpecName "kube-api-access-wt4gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.155867 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c35d0c45-bd4b-4e9c-bd85-e121f336a572" (UID: "c35d0c45-bd4b-4e9c-bd85-e121f336a572"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.234931 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.234974 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt4gh\" (UniqueName: \"kubernetes.io/projected/c35d0c45-bd4b-4e9c-bd85-e121f336a572-kube-api-access-wt4gh\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.234985 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d0c45-bd4b-4e9c-bd85-e121f336a572-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.319600 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cf2f" event={"ID":"c35d0c45-bd4b-4e9c-bd85-e121f336a572","Type":"ContainerDied","Data":"8f1932c1fc5f56edd5f4578ad960bda441efba90565d7ff24c78b3ed7570f5ef"} Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.319661 4897 scope.go:117] "RemoveContainer" containerID="e34fbe5ee69378b4f5d2ffeaf74cca67258c0592c9a85014e9b47fb05a610874" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.319683 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cf2f" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.360993 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cf2f"] Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.368917 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cf2f"] Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.686068 4897 scope.go:117] "RemoveContainer" containerID="ec40a6bb051d7dc153370eec2d53aeff6abd46ebcd0dc7e229cc7891e742b111" Feb 14 18:46:04 crc kubenswrapper[4897]: I0214 18:46:04.940012 4897 scope.go:117] "RemoveContainer" containerID="2bc4f5fac5225af938a6d9bd383011802581fa899aed5dcae5fb26893275e7f8" Feb 14 18:46:05 crc kubenswrapper[4897]: I0214 18:46:05.807967 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" path="/var/lib/kubelet/pods/c35d0c45-bd4b-4e9c-bd85-e121f336a572/volumes" Feb 14 18:46:06 crc kubenswrapper[4897]: I0214 18:46:06.335473 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de"} Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.355290 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerStarted","Data":"347c4df4681fe6466512a3690f8b9b20e5836734bdf96b46ed34405eb71d3530"} Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.358815 4897 generic.go:334] "Generic (PLEG): container finished" podID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerID="7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810" exitCode=0 Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 
18:46:08.358951 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rph5f" event={"ID":"d360c9a9-d428-4ca4-9379-e052a6e60b22","Type":"ContainerDied","Data":"7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810"} Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.377333 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerID="bd6aaa0670b395f704bb6637597bcb06ce8c68dfe19948d0fd34c025ed062a76" exitCode=0 Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.377458 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6rm4" event={"ID":"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74","Type":"ContainerDied","Data":"bd6aaa0670b395f704bb6637597bcb06ce8c68dfe19948d0fd34c025ed062a76"} Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.386073 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzlp" event={"ID":"6cdafc37-f772-4b48-b1cf-29759861b373","Type":"ContainerDied","Data":"655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94"} Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.386148 4897 generic.go:334] "Generic (PLEG): container finished" podID="6cdafc37-f772-4b48-b1cf-29759861b373" containerID="655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94" exitCode=0 Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.398000 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84zhb" podStartSLOduration=4.554701801 podStartE2EDuration="1m11.397977772s" podCreationTimestamp="2026-02-14 18:44:57 +0000 UTC" firstStartedPulling="2026-02-14 18:44:58.581174144 +0000 UTC m=+151.557582617" lastFinishedPulling="2026-02-14 18:46:05.424450055 +0000 UTC m=+218.400858588" observedRunningTime="2026-02-14 18:46:08.388888763 +0000 UTC m=+221.365297266" 
watchObservedRunningTime="2026-02-14 18:46:08.397977772 +0000 UTC m=+221.374386275" Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.401522 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjckf" event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerStarted","Data":"5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91"} Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.405164 4897 generic.go:334] "Generic (PLEG): container finished" podID="037c41d9-7976-43c9-baa6-57aec44c28de" containerID="90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514" exitCode=0 Feb 14 18:46:08 crc kubenswrapper[4897]: I0214 18:46:08.405222 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c74d" event={"ID":"037c41d9-7976-43c9-baa6-57aec44c28de","Type":"ContainerDied","Data":"90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514"} Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.413301 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6rm4" event={"ID":"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74","Type":"ContainerStarted","Data":"9fb1c3881d3d907ce8e9972a59289d66c498d0d36a4c79641d9297d9d4bcd028"} Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.417556 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzlp" event={"ID":"6cdafc37-f772-4b48-b1cf-29759861b373","Type":"ContainerStarted","Data":"85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad"} Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.422400 4897 generic.go:334] "Generic (PLEG): container finished" podID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerID="5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91" exitCode=0 Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.422558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-wjckf" event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerDied","Data":"5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91"} Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.426324 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c74d" event={"ID":"037c41d9-7976-43c9-baa6-57aec44c28de","Type":"ContainerStarted","Data":"7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102"} Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.427041 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.427062 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.429546 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rph5f" event={"ID":"d360c9a9-d428-4ca4-9379-e052a6e60b22","Type":"ContainerStarted","Data":"df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77"} Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.440568 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6rm4" podStartSLOduration=2.109532377 podStartE2EDuration="1m12.44054894s" podCreationTimestamp="2026-02-14 18:44:57 +0000 UTC" firstStartedPulling="2026-02-14 18:44:58.565778358 +0000 UTC m=+151.542186841" lastFinishedPulling="2026-02-14 18:46:08.896794911 +0000 UTC m=+221.873203404" observedRunningTime="2026-02-14 18:46:09.43993538 +0000 UTC m=+222.416343873" watchObservedRunningTime="2026-02-14 18:46:09.44054894 +0000 UTC m=+222.416957433" Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.466421 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-4c74d" podStartSLOduration=3.032407768 podStartE2EDuration="1m13.466399722s" podCreationTimestamp="2026-02-14 18:44:56 +0000 UTC" firstStartedPulling="2026-02-14 18:44:58.567560587 +0000 UTC m=+151.543969070" lastFinishedPulling="2026-02-14 18:46:09.001552541 +0000 UTC m=+221.977961024" observedRunningTime="2026-02-14 18:46:09.464869252 +0000 UTC m=+222.441277765" watchObservedRunningTime="2026-02-14 18:46:09.466399722 +0000 UTC m=+222.442808205" Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.484610 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twzlp" podStartSLOduration=3.245063912 podStartE2EDuration="1m11.484593351s" podCreationTimestamp="2026-02-14 18:44:58 +0000 UTC" firstStartedPulling="2026-02-14 18:45:00.640781729 +0000 UTC m=+153.617190212" lastFinishedPulling="2026-02-14 18:46:08.880311158 +0000 UTC m=+221.856719651" observedRunningTime="2026-02-14 18:46:09.481426957 +0000 UTC m=+222.457835450" watchObservedRunningTime="2026-02-14 18:46:09.484593351 +0000 UTC m=+222.461001834" Feb 14 18:46:09 crc kubenswrapper[4897]: I0214 18:46:09.519234 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rph5f" podStartSLOduration=3.320593277 podStartE2EDuration="1m13.519217971s" podCreationTimestamp="2026-02-14 18:44:56 +0000 UTC" firstStartedPulling="2026-02-14 18:44:58.584314207 +0000 UTC m=+151.560722690" lastFinishedPulling="2026-02-14 18:46:08.782938871 +0000 UTC m=+221.759347384" observedRunningTime="2026-02-14 18:46:09.517622099 +0000 UTC m=+222.494030592" watchObservedRunningTime="2026-02-14 18:46:09.519217971 +0000 UTC m=+222.495626454" Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.437158 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjckf" 
event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerStarted","Data":"60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79"} Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.442505 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.468234 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wjckf" podStartSLOduration=2.341294109 podStartE2EDuration="1m10.468214428s" podCreationTimestamp="2026-02-14 18:45:00 +0000 UTC" firstStartedPulling="2026-02-14 18:45:01.696521163 +0000 UTC m=+154.672929646" lastFinishedPulling="2026-02-14 18:46:09.823441482 +0000 UTC m=+222.799849965" observedRunningTime="2026-02-14 18:46:10.465625922 +0000 UTC m=+223.442034415" watchObservedRunningTime="2026-02-14 18:46:10.468214428 +0000 UTC m=+223.444622911" Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.474371 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-twzlp" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="registry-server" probeResult="failure" output=< Feb 14 18:46:10 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 18:46:10 crc kubenswrapper[4897]: > Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.483816 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.776696 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:46:10 crc kubenswrapper[4897]: I0214 18:46:10.776755 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:46:11 crc kubenswrapper[4897]: I0214 
18:46:11.812117 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wjckf" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="registry-server" probeResult="failure" output=< Feb 14 18:46:11 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 18:46:11 crc kubenswrapper[4897]: > Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.185479 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7796f4d89c-ljgmh"] Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.186254 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" podUID="8cd8e4fe-070e-4868-b044-e2cbe1205989" containerName="controller-manager" containerID="cri-o://1a866da21f68c64765fbc4a518ca5c6390d6fd0ec310c1146fc64389de84f913" gracePeriod=30 Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.287253 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"] Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.287777 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" podUID="ec0130b3-7cc4-4a83-a493-520737eaa30c" containerName="route-controller-manager" containerID="cri-o://8684ea7ede908e964a116dd71ba8f9ed677e31f6a52e928b266fa120e42626ff" gracePeriod=30 Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.491619 4897 generic.go:334] "Generic (PLEG): container finished" podID="ec0130b3-7cc4-4a83-a493-520737eaa30c" containerID="8684ea7ede908e964a116dd71ba8f9ed677e31f6a52e928b266fa120e42626ff" exitCode=0 Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.491700 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" event={"ID":"ec0130b3-7cc4-4a83-a493-520737eaa30c","Type":"ContainerDied","Data":"8684ea7ede908e964a116dd71ba8f9ed677e31f6a52e928b266fa120e42626ff"} Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.495591 4897 generic.go:334] "Generic (PLEG): container finished" podID="8cd8e4fe-070e-4868-b044-e2cbe1205989" containerID="1a866da21f68c64765fbc4a518ca5c6390d6fd0ec310c1146fc64389de84f913" exitCode=0 Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.495641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" event={"ID":"8cd8e4fe-070e-4868-b044-e2cbe1205989","Type":"ContainerDied","Data":"1a866da21f68c64765fbc4a518ca5c6390d6fd0ec310c1146fc64389de84f913"} Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.745068 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.748296 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918303 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-client-ca\") pod \"8cd8e4fe-070e-4868-b044-e2cbe1205989\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-proxy-ca-bundles\") pod \"8cd8e4fe-070e-4868-b044-e2cbe1205989\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918427 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-config\") pod \"ec0130b3-7cc4-4a83-a493-520737eaa30c\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918472 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0130b3-7cc4-4a83-a493-520737eaa30c-serving-cert\") pod \"ec0130b3-7cc4-4a83-a493-520737eaa30c\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918497 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k59z\" (UniqueName: \"kubernetes.io/projected/8cd8e4fe-070e-4868-b044-e2cbe1205989-kube-api-access-8k59z\") pod \"8cd8e4fe-070e-4868-b044-e2cbe1205989\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918524 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cpktq\" (UniqueName: \"kubernetes.io/projected/ec0130b3-7cc4-4a83-a493-520737eaa30c-kube-api-access-cpktq\") pod \"ec0130b3-7cc4-4a83-a493-520737eaa30c\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd8e4fe-070e-4868-b044-e2cbe1205989-serving-cert\") pod \"8cd8e4fe-070e-4868-b044-e2cbe1205989\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918633 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-config\") pod \"8cd8e4fe-070e-4868-b044-e2cbe1205989\" (UID: \"8cd8e4fe-070e-4868-b044-e2cbe1205989\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.918676 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-client-ca\") pod \"ec0130b3-7cc4-4a83-a493-520737eaa30c\" (UID: \"ec0130b3-7cc4-4a83-a493-520737eaa30c\") " Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.919398 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-client-ca" (OuterVolumeSpecName: "client-ca") pod "8cd8e4fe-070e-4868-b044-e2cbe1205989" (UID: "8cd8e4fe-070e-4868-b044-e2cbe1205989"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.920150 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec0130b3-7cc4-4a83-a493-520737eaa30c" (UID: "ec0130b3-7cc4-4a83-a493-520737eaa30c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.920397 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8cd8e4fe-070e-4868-b044-e2cbe1205989" (UID: "8cd8e4fe-070e-4868-b044-e2cbe1205989"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.920945 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-config" (OuterVolumeSpecName: "config") pod "ec0130b3-7cc4-4a83-a493-520737eaa30c" (UID: "ec0130b3-7cc4-4a83-a493-520737eaa30c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.920986 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-config" (OuterVolumeSpecName: "config") pod "8cd8e4fe-070e-4868-b044-e2cbe1205989" (UID: "8cd8e4fe-070e-4868-b044-e2cbe1205989"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.925813 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0130b3-7cc4-4a83-a493-520737eaa30c-kube-api-access-cpktq" (OuterVolumeSpecName: "kube-api-access-cpktq") pod "ec0130b3-7cc4-4a83-a493-520737eaa30c" (UID: "ec0130b3-7cc4-4a83-a493-520737eaa30c"). InnerVolumeSpecName "kube-api-access-cpktq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.926533 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd8e4fe-070e-4868-b044-e2cbe1205989-kube-api-access-8k59z" (OuterVolumeSpecName: "kube-api-access-8k59z") pod "8cd8e4fe-070e-4868-b044-e2cbe1205989" (UID: "8cd8e4fe-070e-4868-b044-e2cbe1205989"). InnerVolumeSpecName "kube-api-access-8k59z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.927445 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd8e4fe-070e-4868-b044-e2cbe1205989-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cd8e4fe-070e-4868-b044-e2cbe1205989" (UID: "8cd8e4fe-070e-4868-b044-e2cbe1205989"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:16 crc kubenswrapper[4897]: I0214 18:46:16.928609 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec0130b3-7cc4-4a83-a493-520737eaa30c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec0130b3-7cc4-4a83-a493-520737eaa30c" (UID: "ec0130b3-7cc4-4a83-a493-520737eaa30c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020732 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020783 4897 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020805 4897 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020827 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0130b3-7cc4-4a83-a493-520737eaa30c-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020848 4897 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0130b3-7cc4-4a83-a493-520737eaa30c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020865 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k59z\" (UniqueName: \"kubernetes.io/projected/8cd8e4fe-070e-4868-b044-e2cbe1205989-kube-api-access-8k59z\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020883 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpktq\" (UniqueName: \"kubernetes.io/projected/ec0130b3-7cc4-4a83-a493-520737eaa30c-kube-api-access-cpktq\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020900 4897 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd8e4fe-070e-4868-b044-e2cbe1205989-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.020920 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd8e4fe-070e-4868-b044-e2cbe1205989-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.198649 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.200263 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.280584 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.367848 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.367926 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.432328 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.507392 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.507417 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7796f4d89c-ljgmh" event={"ID":"8cd8e4fe-070e-4868-b044-e2cbe1205989","Type":"ContainerDied","Data":"cee60748aff435b6fec65bd49ffe5f90e8a2ea1e947d614ac87ce7cd4cd3524b"} Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.507491 4897 scope.go:117] "RemoveContainer" containerID="1a866da21f68c64765fbc4a518ca5c6390d6fd0ec310c1146fc64389de84f913" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.510079 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" event={"ID":"ec0130b3-7cc4-4a83-a493-520737eaa30c","Type":"ContainerDied","Data":"776b3b4df12614fbaa81a158e0d2e2c73e11aaf8ada54396fc21fab1622f0ec2"} Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.510139 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.535281 4897 scope.go:117] "RemoveContainer" containerID="8684ea7ede908e964a116dd71ba8f9ed677e31f6a52e928b266fa120e42626ff" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.564993 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.569484 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.569538 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.569971 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb77dfd47-qtwnx"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.577782 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7796f4d89c-ljgmh"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.580324 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.582661 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7796f4d89c-ljgmh"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.593167 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.641810 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.768728 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.768798 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.807879 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd8e4fe-070e-4868-b044-e2cbe1205989" path="/var/lib/kubelet/pods/8cd8e4fe-070e-4868-b044-e2cbe1205989/volumes" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.809304 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0130b3-7cc4-4a83-a493-520737eaa30c" path="/var/lib/kubelet/pods/ec0130b3-7cc4-4a83-a493-520737eaa30c/volumes" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.828253 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86b69bbd49-9rnzb"] Feb 14 18:46:17 crc kubenswrapper[4897]: E0214 18:46:17.828703 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0130b3-7cc4-4a83-a493-520737eaa30c" containerName="route-controller-manager" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.828736 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0130b3-7cc4-4a83-a493-520737eaa30c" containerName="route-controller-manager" Feb 14 18:46:17 crc kubenswrapper[4897]: E0214 18:46:17.828759 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="extract-utilities" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.828774 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="extract-utilities" Feb 14 18:46:17 crc kubenswrapper[4897]: E0214 18:46:17.828790 
4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="extract-content" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.828802 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="extract-content" Feb 14 18:46:17 crc kubenswrapper[4897]: E0214 18:46:17.828834 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd8e4fe-070e-4868-b044-e2cbe1205989" containerName="controller-manager" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.828848 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd8e4fe-070e-4868-b044-e2cbe1205989" containerName="controller-manager" Feb 14 18:46:17 crc kubenswrapper[4897]: E0214 18:46:17.828866 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="registry-server" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.828878 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="registry-server" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.829114 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0130b3-7cc4-4a83-a493-520737eaa30c" containerName="route-controller-manager" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.829135 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd8e4fe-070e-4868-b044-e2cbe1205989" containerName="controller-manager" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.829152 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35d0c45-bd4b-4e9c-bd85-e121f336a572" containerName="registry-server" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.829955 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.830899 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.831672 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.833758 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.833767 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.833933 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.834123 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.834794 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.835407 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.835524 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.835763 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.836222 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.836344 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.836548 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.841409 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.843543 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.847142 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.851210 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.854665 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b69bbd49-9rnzb"] Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4clb8\" (UniqueName: \"kubernetes.io/projected/7e892adf-50be-43db-bfb6-6ad0530bf7a5-kube-api-access-4clb8\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " 
pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933237 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7325d839-07ed-4966-bb45-10719d4ec580-serving-cert\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933282 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e892adf-50be-43db-bfb6-6ad0530bf7a5-config\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933402 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcfgw\" (UniqueName: \"kubernetes.io/projected/7325d839-07ed-4966-bb45-10719d4ec580-kube-api-access-rcfgw\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933435 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e892adf-50be-43db-bfb6-6ad0530bf7a5-client-ca\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933454 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-proxy-ca-bundles\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933504 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-client-ca\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933528 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e892adf-50be-43db-bfb6-6ad0530bf7a5-serving-cert\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:17 crc kubenswrapper[4897]: I0214 18:46:17.933563 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-config\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035111 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcfgw\" (UniqueName: \"kubernetes.io/projected/7325d839-07ed-4966-bb45-10719d4ec580-kube-api-access-rcfgw\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: 
\"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035191 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e892adf-50be-43db-bfb6-6ad0530bf7a5-client-ca\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035220 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-proxy-ca-bundles\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035257 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-client-ca\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035288 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e892adf-50be-43db-bfb6-6ad0530bf7a5-serving-cert\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035324 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-config\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035362 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4clb8\" (UniqueName: \"kubernetes.io/projected/7e892adf-50be-43db-bfb6-6ad0530bf7a5-kube-api-access-4clb8\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035392 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7325d839-07ed-4966-bb45-10719d4ec580-serving-cert\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.035414 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e892adf-50be-43db-bfb6-6ad0530bf7a5-config\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.036922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e892adf-50be-43db-bfb6-6ad0530bf7a5-config\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 
18:46:18.038901 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-client-ca\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.039870 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e892adf-50be-43db-bfb6-6ad0530bf7a5-serving-cert\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.039905 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-config\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.041436 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e892adf-50be-43db-bfb6-6ad0530bf7a5-client-ca\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.038792 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7325d839-07ed-4966-bb45-10719d4ec580-proxy-ca-bundles\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " 
pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.047407 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7325d839-07ed-4966-bb45-10719d4ec580-serving-cert\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.059607 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcfgw\" (UniqueName: \"kubernetes.io/projected/7325d839-07ed-4966-bb45-10719d4ec580-kube-api-access-rcfgw\") pod \"controller-manager-86b69bbd49-9rnzb\" (UID: \"7325d839-07ed-4966-bb45-10719d4ec580\") " pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.069406 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4clb8\" (UniqueName: \"kubernetes.io/projected/7e892adf-50be-43db-bfb6-6ad0530bf7a5-kube-api-access-4clb8\") pod \"route-controller-manager-66464749f5-tftwf\" (UID: \"7e892adf-50be-43db-bfb6-6ad0530bf7a5\") " pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.167146 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.180233 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.462670 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b69bbd49-9rnzb"] Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.511628 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf"] Feb 14 18:46:18 crc kubenswrapper[4897]: W0214 18:46:18.518832 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e892adf_50be_43db_bfb6_6ad0530bf7a5.slice/crio-8fb340bc624bc52ba9c70f2163225352685862f2f805d2f6e9f4e3625fb9106d WatchSource:0}: Error finding container 8fb340bc624bc52ba9c70f2163225352685862f2f805d2f6e9f4e3625fb9106d: Status 404 returned error can't find the container with id 8fb340bc624bc52ba9c70f2163225352685862f2f805d2f6e9f4e3625fb9106d Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.520817 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" event={"ID":"7325d839-07ed-4966-bb45-10719d4ec580","Type":"ContainerStarted","Data":"d911a5fa6ea8202c93a3faede27a8fb3c767ae8469c4e6671821f0b1cde46c72"} Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.568240 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.595267 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:46:18 crc kubenswrapper[4897]: I0214 18:46:18.703821 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8v6s"] Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 
18:46:19.491265 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.533471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" event={"ID":"7e892adf-50be-43db-bfb6-6ad0530bf7a5","Type":"ContainerStarted","Data":"d4a5ac44915f8d2ec150972798e573aedc58e447d8751f83be105c48b10327a2"} Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.533529 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" event={"ID":"7e892adf-50be-43db-bfb6-6ad0530bf7a5","Type":"ContainerStarted","Data":"8fb340bc624bc52ba9c70f2163225352685862f2f805d2f6e9f4e3625fb9106d"} Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.533553 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.536391 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" event={"ID":"7325d839-07ed-4966-bb45-10719d4ec580","Type":"ContainerStarted","Data":"b5a7994574aca1091156dc54e21e19937c01fd33af545851e0560dafb8bc8803"} Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.536816 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.542899 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.543407 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.546163 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.561478 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podStartSLOduration=3.561455376 podStartE2EDuration="3.561455376s" podCreationTimestamp="2026-02-14 18:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:46:19.553481063 +0000 UTC m=+232.529889566" watchObservedRunningTime="2026-02-14 18:46:19.561455376 +0000 UTC m=+232.537863859" Feb 14 18:46:19 crc kubenswrapper[4897]: I0214 18:46:19.597952 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podStartSLOduration=3.597934677 podStartE2EDuration="3.597934677s" podCreationTimestamp="2026-02-14 18:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:46:19.59498734 +0000 UTC m=+232.571395843" watchObservedRunningTime="2026-02-14 18:46:19.597934677 +0000 UTC m=+232.574343160" Feb 14 18:46:20 crc kubenswrapper[4897]: I0214 18:46:20.204377 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84zhb"] Feb 14 18:46:20 crc kubenswrapper[4897]: I0214 18:46:20.543288 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84zhb" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="registry-server" containerID="cri-o://347c4df4681fe6466512a3690f8b9b20e5836734bdf96b46ed34405eb71d3530" 
gracePeriod=2 Feb 14 18:46:20 crc kubenswrapper[4897]: I0214 18:46:20.858770 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:46:20 crc kubenswrapper[4897]: I0214 18:46:20.926433 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:46:21 crc kubenswrapper[4897]: I0214 18:46:21.204746 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6rm4"] Feb 14 18:46:21 crc kubenswrapper[4897]: I0214 18:46:21.204957 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h6rm4" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="registry-server" containerID="cri-o://9fb1c3881d3d907ce8e9972a59289d66c498d0d36a4c79641d9297d9d4bcd028" gracePeriod=2 Feb 14 18:46:21 crc kubenswrapper[4897]: I0214 18:46:21.554308 4897 generic.go:334] "Generic (PLEG): container finished" podID="696766b1-de35-447a-8f84-537044aa0f34" containerID="347c4df4681fe6466512a3690f8b9b20e5836734bdf96b46ed34405eb71d3530" exitCode=0 Feb 14 18:46:21 crc kubenswrapper[4897]: I0214 18:46:21.554382 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerDied","Data":"347c4df4681fe6466512a3690f8b9b20e5836734bdf96b46ed34405eb71d3530"} Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.246217 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.412789 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-catalog-content\") pod \"696766b1-de35-447a-8f84-537044aa0f34\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.412994 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xs62\" (UniqueName: \"kubernetes.io/projected/696766b1-de35-447a-8f84-537044aa0f34-kube-api-access-7xs62\") pod \"696766b1-de35-447a-8f84-537044aa0f34\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.413090 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-utilities\") pod \"696766b1-de35-447a-8f84-537044aa0f34\" (UID: \"696766b1-de35-447a-8f84-537044aa0f34\") " Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.414898 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-utilities" (OuterVolumeSpecName: "utilities") pod "696766b1-de35-447a-8f84-537044aa0f34" (UID: "696766b1-de35-447a-8f84-537044aa0f34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.437456 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696766b1-de35-447a-8f84-537044aa0f34-kube-api-access-7xs62" (OuterVolumeSpecName: "kube-api-access-7xs62") pod "696766b1-de35-447a-8f84-537044aa0f34" (UID: "696766b1-de35-447a-8f84-537044aa0f34"). InnerVolumeSpecName "kube-api-access-7xs62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.491529 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "696766b1-de35-447a-8f84-537044aa0f34" (UID: "696766b1-de35-447a-8f84-537044aa0f34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.514749 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.514788 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/696766b1-de35-447a-8f84-537044aa0f34-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.514804 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xs62\" (UniqueName: \"kubernetes.io/projected/696766b1-de35-447a-8f84-537044aa0f34-kube-api-access-7xs62\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.562857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84zhb" event={"ID":"696766b1-de35-447a-8f84-537044aa0f34","Type":"ContainerDied","Data":"e4491d4fff704b77d948b6406477e0f8338ff38fb4e93821cf8ebd309084c212"} Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.562931 4897 scope.go:117] "RemoveContainer" containerID="347c4df4681fe6466512a3690f8b9b20e5836734bdf96b46ed34405eb71d3530" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.563140 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84zhb" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.570548 4897 generic.go:334] "Generic (PLEG): container finished" podID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerID="9fb1c3881d3d907ce8e9972a59289d66c498d0d36a4c79641d9297d9d4bcd028" exitCode=0 Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.570602 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6rm4" event={"ID":"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74","Type":"ContainerDied","Data":"9fb1c3881d3d907ce8e9972a59289d66c498d0d36a4c79641d9297d9d4bcd028"} Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.647818 4897 scope.go:117] "RemoveContainer" containerID="d9154695812d8eb20f6ba1b55e5a813e35989852f7de76ff6c0671ee683fadc4" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.647965 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.661071 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84zhb"] Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.665570 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84zhb"] Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.672101 4897 scope.go:117] "RemoveContainer" containerID="858bc61f7bb6de22b22ee68c63ad04754b6e31ca6e5b45016fe83d84d7f6dc7e" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.820071 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnpjk\" (UniqueName: \"kubernetes.io/projected/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-kube-api-access-qnpjk\") pod \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.820306 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-utilities\") pod \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.820361 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-catalog-content\") pod \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\" (UID: \"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74\") " Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.821375 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-utilities" (OuterVolumeSpecName: "utilities") pod "5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" (UID: "5a07b450-333c-4f3f-8c4d-4b9bd35b7d74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.826094 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-kube-api-access-qnpjk" (OuterVolumeSpecName: "kube-api-access-qnpjk") pod "5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" (UID: "5a07b450-333c-4f3f-8c4d-4b9bd35b7d74"). InnerVolumeSpecName "kube-api-access-qnpjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.902932 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" (UID: "5a07b450-333c-4f3f-8c4d-4b9bd35b7d74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.922165 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnpjk\" (UniqueName: \"kubernetes.io/projected/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-kube-api-access-qnpjk\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.922212 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:22 crc kubenswrapper[4897]: I0214 18:46:22.922226 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.584936 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6rm4" event={"ID":"5a07b450-333c-4f3f-8c4d-4b9bd35b7d74","Type":"ContainerDied","Data":"04d8ef554810c70e1851f6cd3e7a10efb226bbffa990026363f3043ceeff0b22"} Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.585069 4897 scope.go:117] "RemoveContainer" containerID="9fb1c3881d3d907ce8e9972a59289d66c498d0d36a4c79641d9297d9d4bcd028" Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.585081 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6rm4" Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.614618 4897 scope.go:117] "RemoveContainer" containerID="bd6aaa0670b395f704bb6637597bcb06ce8c68dfe19948d0fd34c025ed062a76" Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.633549 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6rm4"] Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.648123 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h6rm4"] Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.653912 4897 scope.go:117] "RemoveContainer" containerID="bee5b88c3c6c44098aed3f41c7bcd73da5c9ce0da8e84552712bb7824754b58b" Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.807924 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" path="/var/lib/kubelet/pods/5a07b450-333c-4f3f-8c4d-4b9bd35b7d74/volumes" Feb 14 18:46:23 crc kubenswrapper[4897]: I0214 18:46:23.809993 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696766b1-de35-447a-8f84-537044aa0f34" path="/var/lib/kubelet/pods/696766b1-de35-447a-8f84-537044aa0f34/volumes" Feb 14 18:46:24 crc kubenswrapper[4897]: I0214 18:46:24.607456 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjckf"] Feb 14 18:46:24 crc kubenswrapper[4897]: I0214 18:46:24.608866 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wjckf" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="registry-server" containerID="cri-o://60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79" gracePeriod=2 Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.221384 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.355871 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-catalog-content\") pod \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.355958 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72nlk\" (UniqueName: \"kubernetes.io/projected/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-kube-api-access-72nlk\") pod \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.356205 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-utilities\") pod \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\" (UID: \"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e\") " Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.358222 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-utilities" (OuterVolumeSpecName: "utilities") pod "7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" (UID: "7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.365743 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-kube-api-access-72nlk" (OuterVolumeSpecName: "kube-api-access-72nlk") pod "7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" (UID: "7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e"). InnerVolumeSpecName "kube-api-access-72nlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.458762 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.458884 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72nlk\" (UniqueName: \"kubernetes.io/projected/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-kube-api-access-72nlk\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.536160 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" (UID: "7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.560424 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.607174 4897 generic.go:334] "Generic (PLEG): container finished" podID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerID="60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79" exitCode=0 Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.607251 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjckf" event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerDied","Data":"60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79"} Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.607332 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-wjckf" event={"ID":"7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e","Type":"ContainerDied","Data":"7f69c9ee4821b91b02e0a459c416e8f899c0f5bc97670f1f00d7e02682a3fc23"} Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.607331 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjckf" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.607359 4897 scope.go:117] "RemoveContainer" containerID="60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.635448 4897 scope.go:117] "RemoveContainer" containerID="5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.657644 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjckf"] Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.660947 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wjckf"] Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.682635 4897 scope.go:117] "RemoveContainer" containerID="dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.705814 4897 scope.go:117] "RemoveContainer" containerID="60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79" Feb 14 18:46:25 crc kubenswrapper[4897]: E0214 18:46:25.706624 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79\": container with ID starting with 60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79 not found: ID does not exist" containerID="60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.706702 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79"} err="failed to get container status \"60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79\": rpc error: code = NotFound desc = could not find container \"60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79\": container with ID starting with 60ecec18c948974e204cc8041d0add02b0519c372616913861a45a9b07ed0c79 not found: ID does not exist" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.706749 4897 scope.go:117] "RemoveContainer" containerID="5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91" Feb 14 18:46:25 crc kubenswrapper[4897]: E0214 18:46:25.707866 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91\": container with ID starting with 5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91 not found: ID does not exist" containerID="5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.708165 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91"} err="failed to get container status \"5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91\": rpc error: code = NotFound desc = could not find container \"5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91\": container with ID starting with 5d8c9d46daaf2d36d4130a7dbc6399dad3850d0249a0f4557e5616965d8d2d91 not found: ID does not exist" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.708343 4897 scope.go:117] "RemoveContainer" containerID="dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359" Feb 14 18:46:25 crc kubenswrapper[4897]: E0214 
18:46:25.709117 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359\": container with ID starting with dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359 not found: ID does not exist" containerID="dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.709204 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359"} err="failed to get container status \"dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359\": rpc error: code = NotFound desc = could not find container \"dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359\": container with ID starting with dbcab298bba1e66af2b06f2640d62eb84d03aa33c3da16e35f5c5963ec824359 not found: ID does not exist" Feb 14 18:46:25 crc kubenswrapper[4897]: I0214 18:46:25.807163 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" path="/var/lib/kubelet/pods/7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e/volumes" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.889617 4897 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890387 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="extract-content" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890424 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="extract-content" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890451 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890468 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890493 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="extract-content" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890510 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="extract-content" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890534 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="extract-utilities" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890551 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="extract-utilities" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890577 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="extract-content" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890594 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="extract-content" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890614 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890631 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890654 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890670 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890706 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="extract-utilities" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890722 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="extract-utilities" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.890743 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="extract-utilities" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.890759 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="extract-utilities" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.891024 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a07b450-333c-4f3f-8c4d-4b9bd35b7d74" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.891110 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7287ac6e-a9b8-45f8-8b29-f2e46fe20d1e" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.891130 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="696766b1-de35-447a-8f84-537044aa0f34" containerName="registry-server" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.891943 4897 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893018 4897 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0" gracePeriod=15 Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893172 4897 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893276 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893417 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a" gracePeriod=15 Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893710 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d" gracePeriod=15 Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893866 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006" gracePeriod=15 Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.893938 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00" gracePeriod=15 Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894394 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894422 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894440 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894452 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894475 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894488 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894513 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894526 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894546 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894558 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894580 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894595 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.894619 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894631 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894832 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894853 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894874 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894892 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894909 4897 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.894932 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 18:46:27 crc kubenswrapper[4897]: E0214 18:46:27.895129 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.895145 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.895332 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.908274 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.996451 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997101 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997232 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997264 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997303 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997716 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:27 crc kubenswrapper[4897]: I0214 18:46:27.997768 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099474 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099535 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099638 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099659 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099717 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099860 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099737 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.099737 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.100212 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.100260 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.100361 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.100385 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: 
I0214 18:46:28.100321 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.100521 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.636500 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.638868 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.639975 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d" exitCode=0 Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.640064 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a" exitCode=0 Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.640088 4897 scope.go:117] "RemoveContainer" containerID="e9c886eaa6eb452b6a24aedd506e621a6be5bf6a2f68b3262745b56b73101a2a" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.640101 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00" exitCode=0 Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.640228 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006" exitCode=2 Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.646845 4897 generic.go:334] "Generic (PLEG): container finished" podID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" containerID="e58e4c02dcf3e3aa6c8744d59a13b9604dcfa050274ca705b7257dbbc11bb678" exitCode=0 Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.646915 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69e1bf34-207a-47f6-a31f-035e5e25b2d7","Type":"ContainerDied","Data":"e58e4c02dcf3e3aa6c8744d59a13b9604dcfa050274ca705b7257dbbc11bb678"} Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.648075 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:28 crc kubenswrapper[4897]: E0214 18:46:28.937267 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:28 crc kubenswrapper[4897]: E0214 18:46:28.938221 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:28 crc kubenswrapper[4897]: E0214 18:46:28.939071 4897 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:28 crc kubenswrapper[4897]: E0214 18:46:28.939624 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:28 crc kubenswrapper[4897]: E0214 18:46:28.939954 4897 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:28 crc kubenswrapper[4897]: I0214 18:46:28.940024 4897 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 14 18:46:28 crc kubenswrapper[4897]: E0214 18:46:28.940545 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.030945 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.031558 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.032070 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.032352 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 
18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.032608 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.032631 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.142253 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Feb 14 18:46:29 crc kubenswrapper[4897]: E0214 18:46:29.543378 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Feb 14 18:46:29 crc kubenswrapper[4897]: I0214 18:46:29.656910 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.142440 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.143125 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.234633 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-var-lock\") pod \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.234700 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kube-api-access\") pod \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.234741 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kubelet-dir\") pod \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\" (UID: \"69e1bf34-207a-47f6-a31f-035e5e25b2d7\") " Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.234765 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-var-lock" (OuterVolumeSpecName: "var-lock") pod "69e1bf34-207a-47f6-a31f-035e5e25b2d7" (UID: "69e1bf34-207a-47f6-a31f-035e5e25b2d7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.235009 4897 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.235056 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69e1bf34-207a-47f6-a31f-035e5e25b2d7" (UID: "69e1bf34-207a-47f6-a31f-035e5e25b2d7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.240767 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69e1bf34-207a-47f6-a31f-035e5e25b2d7" (UID: "69e1bf34-207a-47f6-a31f-035e5e25b2d7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.317622 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.318616 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.319321 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.319911 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.336638 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.336663 4897 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69e1bf34-207a-47f6-a31f-035e5e25b2d7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.344477 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.437384 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.437640 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.437725 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.437756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.437774 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.437984 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.540154 4897 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.540208 4897 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.540227 4897 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.668193 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.669145 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0" exitCode=0 Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.669205 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.669310 4897 scope.go:117] "RemoveContainer" containerID="099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.671498 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.671356 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69e1bf34-207a-47f6-a31f-035e5e25b2d7","Type":"ContainerDied","Data":"524465194cd8be6127d86e7e7859370f02c9fea1ad667bf00da16c4ddd830d9c"} Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.672906 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="524465194cd8be6127d86e7e7859370f02c9fea1ad667bf00da16c4ddd830d9c" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.684977 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.685616 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.690359 4897 scope.go:117] "RemoveContainer" containerID="38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.702723 4897 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.703716 
4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.711727 4897 scope.go:117] "RemoveContainer" containerID="fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.732110 4897 scope.go:117] "RemoveContainer" containerID="4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.751152 4897 scope.go:117] "RemoveContainer" containerID="e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.774258 4897 scope.go:117] "RemoveContainer" containerID="36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.797659 4897 scope.go:117] "RemoveContainer" containerID="099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.799447 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\": container with ID starting with 099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d not found: ID does not exist" containerID="099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.799510 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d"} err="failed to get container status 
\"099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\": rpc error: code = NotFound desc = could not find container \"099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d\": container with ID starting with 099e601ee68892a7c8ddf6f0a5de71f5f7ac2cbc88d60962daff839334e84b1d not found: ID does not exist" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.799554 4897 scope.go:117] "RemoveContainer" containerID="38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.800145 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\": container with ID starting with 38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a not found: ID does not exist" containerID="38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.800191 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a"} err="failed to get container status \"38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\": rpc error: code = NotFound desc = could not find container \"38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a\": container with ID starting with 38496cc543396d09b888c7188fff154098c80901d42bac77cb8f20e48fe61d2a not found: ID does not exist" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.800218 4897 scope.go:117] "RemoveContainer" containerID="fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.800746 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\": container with ID starting with fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00 not found: ID does not exist" containerID="fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.800789 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00"} err="failed to get container status \"fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\": rpc error: code = NotFound desc = could not find container \"fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00\": container with ID starting with fc4633f63d9caf9c9f0e003da72fa66ae71ecee9f20139d357fa5ad87aab4b00 not found: ID does not exist" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.800817 4897 scope.go:117] "RemoveContainer" containerID="4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.801284 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\": container with ID starting with 4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006 not found: ID does not exist" containerID="4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.801312 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006"} err="failed to get container status \"4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\": rpc error: code = NotFound desc = could not find container \"4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006\": container with ID 
starting with 4b51d470801e48efed9bb203336d157b0182e3dd3a19af320dd4e80575cb5006 not found: ID does not exist" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.801327 4897 scope.go:117] "RemoveContainer" containerID="e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.801650 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\": container with ID starting with e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0 not found: ID does not exist" containerID="e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.801677 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0"} err="failed to get container status \"e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\": rpc error: code = NotFound desc = could not find container \"e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0\": container with ID starting with e6c887e1b7924941be7fb4e9b5844ece52d76bfe59972bb20a5bc5907564b5c0 not found: ID does not exist" Feb 14 18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.801695 4897 scope.go:117] "RemoveContainer" containerID="36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4" Feb 14 18:46:30 crc kubenswrapper[4897]: E0214 18:46:30.802113 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\": container with ID starting with 36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4 not found: ID does not exist" containerID="36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4" Feb 14 
18:46:30 crc kubenswrapper[4897]: I0214 18:46:30.802139 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4"} err="failed to get container status \"36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\": rpc error: code = NotFound desc = could not find container \"36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4\": container with ID starting with 36bcd0527e10a19841af430e255d1cc233bc730456494b64873c3023d5c708d4 not found: ID does not exist" Feb 14 18:46:31 crc kubenswrapper[4897]: I0214 18:46:31.803643 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 14 18:46:31 crc kubenswrapper[4897]: E0214 18:46:31.945642 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="3.2s" Feb 14 18:46:32 crc kubenswrapper[4897]: E0214 18:46:32.960598 4897 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:32 crc kubenswrapper[4897]: I0214 18:46:32.961284 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:33 crc kubenswrapper[4897]: E0214 18:46:33.002302 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18943154a0646099 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 18:46:33.001394329 +0000 UTC m=+245.977802842,LastTimestamp:2026-02-14 18:46:33.001394329 +0000 UTC m=+245.977802842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 18:46:33 crc kubenswrapper[4897]: I0214 18:46:33.695908 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1f8aa3756daef6854a9d7b0ad138876a28c8a428e190ade29e50973cf208a022"} Feb 14 18:46:33 crc kubenswrapper[4897]: I0214 18:46:33.696403 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"af3736f36820609972744994f85fbecaa9349ffac1bff27b23643cdaa4bdaa9c"} Feb 14 18:46:33 crc 
kubenswrapper[4897]: E0214 18:46:33.698177 4897 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:46:33 crc kubenswrapper[4897]: I0214 18:46:33.697956 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:35 crc kubenswrapper[4897]: E0214 18:46:35.147440 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="6.4s" Feb 14 18:46:35 crc kubenswrapper[4897]: E0214 18:46:35.533444 4897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18943154a0646099 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 18:46:33.001394329 
+0000 UTC m=+245.977802842,LastTimestamp:2026-02-14 18:46:33.001394329 +0000 UTC m=+245.977802842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 18:46:37 crc kubenswrapper[4897]: I0214 18:46:37.797906 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.134202 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T18:46:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.135025 4897 kubelet_node_status.go:585] "Error updating node 
status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.135613 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.136088 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.136519 4897 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.136568 4897 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 18:46:39 crc kubenswrapper[4897]: E0214 18:46:39.876419 4897 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" volumeName="registry-storage" Feb 14 18:46:41 crc kubenswrapper[4897]: E0214 18:46:41.549646 4897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="7s" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.759696 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.759755 4897 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c" exitCode=1 Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.759793 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c"} Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.760370 4897 scope.go:117] "RemoveContainer" containerID="4cfa39190969d96ec151920146d68701baeba306310c0aa2e67f687cd4bd3c9c" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.761567 4897 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.762146 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection 
refused" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.793458 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.794527 4897 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.795198 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.852555 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.852613 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:42 crc kubenswrapper[4897]: E0214 18:46:42.853383 4897 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:42 crc kubenswrapper[4897]: I0214 18:46:42.853907 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:42 crc kubenswrapper[4897]: W0214 18:46:42.885170 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a96b7dd4e08902ddc6d0bc714746af99b86fc2e04b2683616806138d33931cfb WatchSource:0}: Error finding container a96b7dd4e08902ddc6d0bc714746af99b86fc2e04b2683616806138d33931cfb: Status 404 returned error can't find the container with id a96b7dd4e08902ddc6d0bc714746af99b86fc2e04b2683616806138d33931cfb Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.737275 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerName="oauth-openshift" containerID="cri-o://f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28" gracePeriod=15 Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.770836 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.770987 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"136abac019a0ee5c9850930691dc107c7c06be287d7576356806c0d9845e5a92"} Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.772397 4897 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" 
Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.772869 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.774472 4897 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2187573fd0af6de2c3f4bfbb1500d51c750a77d2792ddb2e116226b35ec443f1" exitCode=0 Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.774520 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2187573fd0af6de2c3f4bfbb1500d51c750a77d2792ddb2e116226b35ec443f1"} Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.774569 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a96b7dd4e08902ddc6d0bc714746af99b86fc2e04b2683616806138d33931cfb"} Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.775012 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.775091 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:43 crc kubenswrapper[4897]: E0214 18:46:43.775568 4897 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.775597 4897 status_manager.go:851] "Failed to get status for pod" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:43 crc kubenswrapper[4897]: I0214 18:46:43.776218 4897 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.120302 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.292637 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437218 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-policies\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437307 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-service-ca\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437378 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-provider-selection\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437461 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-session\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437521 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-ocp-branding-template\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: 
\"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437600 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-dir\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437653 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-cliconfig\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437700 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-serving-cert\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437740 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v7bw\" (UniqueName: \"kubernetes.io/projected/88a85445-8209-4b30-a0e0-c0f14d790fb5-kube-api-access-6v7bw\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437803 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-error\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437847 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-trusted-ca-bundle\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.437921 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-idp-0-file-data\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.438013 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-login\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.438125 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-router-certs\") pod \"88a85445-8209-4b30-a0e0-c0f14d790fb5\" (UID: \"88a85445-8209-4b30-a0e0-c0f14d790fb5\") " Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.438272 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.438593 4897 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.438737 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.439177 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.439706 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.439771 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.447843 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.448185 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.448610 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.448708 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.448898 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a85445-8209-4b30-a0e0-c0f14d790fb5-kube-api-access-6v7bw" (OuterVolumeSpecName: "kube-api-access-6v7bw") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "kube-api-access-6v7bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.448997 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.449277 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.449532 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.457493 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "88a85445-8209-4b30-a0e0-c0f14d790fb5" (UID: "88a85445-8209-4b30-a0e0-c0f14d790fb5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542215 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v7bw\" (UniqueName: \"kubernetes.io/projected/88a85445-8209-4b30-a0e0-c0f14d790fb5-kube-api-access-6v7bw\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542246 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542260 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542273 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542285 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542297 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542309 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542320 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542333 4897 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542345 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542358 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542371 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.542382 4897 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88a85445-8209-4b30-a0e0-c0f14d790fb5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.782921 
4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9f3ccdc9cf2e0c5bdecd4a388430a3775acc2dc6278199041b5124766fcf79c"} Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.782968 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b8d2f37349c5581b725a3810e58719d06045f363fe997f46d9c97d89a7dd1dd"} Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.782987 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35c43188f38615597b7aab28fe7a9c9d4e36e2c02048f733b705fedc913debb1"} Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.784309 4897 generic.go:334] "Generic (PLEG): container finished" podID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerID="f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28" exitCode=0 Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.784386 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.784452 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" event={"ID":"88a85445-8209-4b30-a0e0-c0f14d790fb5","Type":"ContainerDied","Data":"f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28"} Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.784485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8v6s" event={"ID":"88a85445-8209-4b30-a0e0-c0f14d790fb5","Type":"ContainerDied","Data":"4c498ae963b7f2ee5451cb19e9552698d3f2efb61c474f5c3c7c0741b18a696d"} Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.784503 4897 scope.go:117] "RemoveContainer" containerID="f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.824266 4897 scope.go:117] "RemoveContainer" containerID="f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28" Feb 14 18:46:44 crc kubenswrapper[4897]: E0214 18:46:44.825205 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28\": container with ID starting with f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28 not found: ID does not exist" containerID="f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28" Feb 14 18:46:44 crc kubenswrapper[4897]: I0214 18:46:44.825248 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28"} err="failed to get container status \"f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28\": rpc error: code = NotFound desc = could not find container 
\"f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28\": container with ID starting with f57d9e8e6b33f49c13b09fab54911760226509c9df432c9b7bbde2f4b41a1c28 not found: ID does not exist" Feb 14 18:46:45 crc kubenswrapper[4897]: I0214 18:46:45.790187 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86a3401cc7af3e34feb5c77d048eea396f7eed90acd3bae81aa74796049dea5f"} Feb 14 18:46:45 crc kubenswrapper[4897]: I0214 18:46:45.790231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f5b9b1e60acbb238abc40d6d918735a494c674ac1540af3002a2f28a8fca4d15"} Feb 14 18:46:45 crc kubenswrapper[4897]: I0214 18:46:45.790444 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:45 crc kubenswrapper[4897]: I0214 18:46:45.790458 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:45 crc kubenswrapper[4897]: I0214 18:46:45.790613 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:47 crc kubenswrapper[4897]: I0214 18:46:47.854589 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:47 crc kubenswrapper[4897]: I0214 18:46:47.855150 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:47 crc kubenswrapper[4897]: I0214 18:46:47.865751 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:50 
crc kubenswrapper[4897]: I0214 18:46:50.798993 4897 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:50 crc kubenswrapper[4897]: I0214 18:46:50.822338 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:50 crc kubenswrapper[4897]: I0214 18:46:50.822373 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:50 crc kubenswrapper[4897]: I0214 18:46:50.826971 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:46:50 crc kubenswrapper[4897]: I0214 18:46:50.830387 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1a864fc-f893-4871-9ca8-51124ebf2245" Feb 14 18:46:51 crc kubenswrapper[4897]: I0214 18:46:51.825539 4897 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:51 crc kubenswrapper[4897]: I0214 18:46:51.825568 4897 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ab8356f5-2c48-45bc-a850-d81b87845955" Feb 14 18:46:52 crc kubenswrapper[4897]: I0214 18:46:52.661216 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:46:52 crc kubenswrapper[4897]: I0214 18:46:52.668222 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:46:54 crc kubenswrapper[4897]: I0214 18:46:54.129623 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 18:46:57 crc kubenswrapper[4897]: I0214 18:46:57.816349 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1a864fc-f893-4871-9ca8-51124ebf2245" Feb 14 18:46:59 crc kubenswrapper[4897]: I0214 18:46:59.854116 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 18:47:00 crc kubenswrapper[4897]: I0214 18:47:00.520658 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.204553 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.255197 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.393612 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.417554 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.444481 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.602378 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.609607 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.684865 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 18:47:01 crc kubenswrapper[4897]: I0214 18:47:01.736377 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.016226 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.016300 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.242708 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.349640 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.390827 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.565189 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.642837 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.663180 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.782437 4897 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.814983 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.934394 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 14 18:47:02 crc kubenswrapper[4897]: I0214 18:47:02.938383 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.027521 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.238426 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.244254 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.340149 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.418946 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.570713 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.619504 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 
18:47:03.671778 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.719062 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.798976 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.824559 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.903182 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 14 18:47:03 crc kubenswrapper[4897]: I0214 18:47:03.973641 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.053569 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.085292 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.160660 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.215256 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.503192 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.536262 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.559144 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.588978 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.662378 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.667072 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.678663 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.883979 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.887092 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.890330 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.948130 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 18:47:04 crc kubenswrapper[4897]: 
I0214 18:47:04.993820 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 18:47:04 crc kubenswrapper[4897]: I0214 18:47:04.996164 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.033437 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.084661 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.100126 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.269672 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.327465 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.378877 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.393020 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.395488 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.418260 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 
14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.503116 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.505531 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.678012 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.734843 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.784551 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.792380 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.840093 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 18:47:05 crc kubenswrapper[4897]: I0214 18:47:05.889728 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.038753 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.060924 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.141831 4897 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.142523 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.148552 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.188576 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.343191 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.465421 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.489883 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.525902 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.541899 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.766530 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.769717 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.868853 4897 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.921881 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 18:47:06 crc kubenswrapper[4897]: I0214 18:47:06.941211 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.025060 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.068653 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.084812 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.135974 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.136517 4897 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.147846 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.169978 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.235851 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 18:47:07 crc kubenswrapper[4897]: 
I0214 18:47:07.263535 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.301288 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.374066 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.461022 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.520140 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.601391 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.700269 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.746941 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.823818 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.934007 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 18:47:07 crc kubenswrapper[4897]: I0214 18:47:07.941881 4897 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.048384 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.170890 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.170987 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.198635 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.275533 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.293960 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.442751 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.578768 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.817221 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.826143 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 18:47:08 crc kubenswrapper[4897]: 
I0214 18:47:08.880290 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.941328 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 18:47:08 crc kubenswrapper[4897]: I0214 18:47:08.979254 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.007153 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.053728 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.085499 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.288282 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.335781 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.390152 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.453497 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.490369 4897 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.522875 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.754444 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.838678 4897 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.846954 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8v6s","openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.847074 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.855135 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.864644 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.871458 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.871435963 podStartE2EDuration="19.871435963s" podCreationTimestamp="2026-02-14 18:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:47:09.866176225 +0000 UTC m=+282.842584778" watchObservedRunningTime="2026-02-14 18:47:09.871435963 +0000 UTC m=+282.847844496" Feb 14 18:47:09 crc kubenswrapper[4897]: I0214 18:47:09.925572 4897 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.024268 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.030096 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.031515 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.036455 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.051743 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.115310 4897 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.158871 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.166634 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.191593 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.231603 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 14 18:47:10 crc 
kubenswrapper[4897]: I0214 18:47:10.432826 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.456527 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.520704 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.641640 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.728612 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.767800 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.834490 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 14 18:47:10 crc kubenswrapper[4897]: I0214 18:47:10.942561 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.019866 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.039515 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.040927 4897 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.056475 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.141318 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.188277 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.225547 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.277812 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.299980 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.320585 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.399447 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.405716 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.548633 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.589219 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.608672 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.628679 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.640132 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.771987 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.799520 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" path="/var/lib/kubelet/pods/88a85445-8209-4b30-a0e0-c0f14d790fb5/volumes"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.858834 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 14 18:47:11 crc kubenswrapper[4897]: I0214 18:47:11.879927 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.241084 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.241549 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.242235 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.242241 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.242388 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.244254 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.244992 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.289897 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.335269 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.357409 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.375711 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.401437 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.468424 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.537288 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.614648 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.633050 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.705628 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.707776 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.730365 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.735770 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.837814 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.846595 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.870837 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 14 18:47:12 crc kubenswrapper[4897]: I0214 18:47:12.894000 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.107275 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.110795 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.129601 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.192997 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.223467 4897 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.223945 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1f8aa3756daef6854a9d7b0ad138876a28c8a428e190ade29e50973cf208a022" gracePeriod=5
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.307511 4897 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.368479 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.388073 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.410888 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.588613 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.638381 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802078 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-868547c79-t4b6c"]
Feb 14 18:47:13 crc kubenswrapper[4897]: E0214 18:47:13.802409 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" containerName="installer"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802444 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" containerName="installer"
Feb 14 18:47:13 crc kubenswrapper[4897]: E0214 18:47:13.802491 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerName="oauth-openshift"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802510 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerName="oauth-openshift"
Feb 14 18:47:13 crc kubenswrapper[4897]: E0214 18:47:13.802540 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802556 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802794 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802826 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e1bf34-207a-47f6-a31f-035e5e25b2d7" containerName="installer"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.802850 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a85445-8209-4b30-a0e0-c0f14d790fb5" containerName="oauth-openshift"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.803490 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.806830 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.808096 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.809335 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.809626 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.810955 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.812308 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.812349 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.812505 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.812612 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.812699 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.814351 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.822927 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.831732 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.834527 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-868547c79-t4b6c"]
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.841951 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.842513 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.842833 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.853847 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863541 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863712 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863758 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4mhh\" (UniqueName: \"kubernetes.io/projected/f5d97820-5ed5-4374-a152-5097c22fbe8b-kube-api-access-z4mhh\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863799 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863843 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-login\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.863993 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864087 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864126 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864240 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5d97820-5ed5-4374-a152-5097c22fbe8b-audit-dir\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864290 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-audit-policies\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864386 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-session\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.864481 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-error\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-audit-policies\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966573 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966636 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-session\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966680 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-error\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966735 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966782 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966837 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966875 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mhh\" (UniqueName: \"kubernetes.io/projected/f5d97820-5ed5-4374-a152-5097c22fbe8b-kube-api-access-z4mhh\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966910 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.966994 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-login\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967057 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967132 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967243 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5d97820-5ed5-4374-a152-5097c22fbe8b-audit-dir\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5d97820-5ed5-4374-a152-5097c22fbe8b-audit-dir\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967503 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-audit-policies\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.967987 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.968053 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.968525 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.972620 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.972727 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-error\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.972826 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.973314 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-session\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.987307 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.987909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.988112 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.993328 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5d97820-5ed5-4374-a152-5097c22fbe8b-v4-0-config-user-template-login\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:13 crc kubenswrapper[4897]: I0214 18:47:13.993413 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4mhh\" (UniqueName: \"kubernetes.io/projected/f5d97820-5ed5-4374-a152-5097c22fbe8b-kube-api-access-z4mhh\") pod \"oauth-openshift-868547c79-t4b6c\" (UID: \"f5d97820-5ed5-4374-a152-5097c22fbe8b\") " pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.007517 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.088629 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.134598 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.234258 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.363812 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.471733 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.555223 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.667993 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.668753 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.679448 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.756653 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.862124 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.936096 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 14 18:47:14 crc kubenswrapper[4897]: I0214 18:47:14.963174 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.063244 4897 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.082501 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.099516 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.371309 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.404203 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.637232 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.726802 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.859533 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-868547c79-t4b6c"]
Feb 14 18:47:15 crc kubenswrapper[4897]: I0214 18:47:15.912513 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.037642 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 14
18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.113941 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.278872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" event={"ID":"f5d97820-5ed5-4374-a152-5097c22fbe8b","Type":"ContainerStarted","Data":"1f4906707bf6871dfb1929d23d69cc35d1c3793065017eac6c0cccaf68f6f9e0"} Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.278947 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" event={"ID":"f5d97820-5ed5-4374-a152-5097c22fbe8b","Type":"ContainerStarted","Data":"6570746c8cc8cc752e3340dedb17398c468b308a006efa70b39b4afbf3ded0b6"} Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.280521 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.312466 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" podStartSLOduration=58.312434992 podStartE2EDuration="58.312434992s" podCreationTimestamp="2026-02-14 18:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:47:16.308176306 +0000 UTC m=+289.284584859" watchObservedRunningTime="2026-02-14 18:47:16.312434992 +0000 UTC m=+289.288843505" Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.377270 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.472346 4897 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.511590 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 14 18:47:16 crc kubenswrapper[4897]: I0214 18:47:16.636452 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.298471 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.298881 4897 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1f8aa3756daef6854a9d7b0ad138876a28c8a428e190ade29e50973cf208a022" exitCode=137 Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.791849 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.791951 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864115 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864226 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864296 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864350 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864388 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864503 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864503 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864567 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864644 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864939 4897 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864962 4897 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.864983 4897 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.865002 4897 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.877599 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:47:18 crc kubenswrapper[4897]: I0214 18:47:18.966001 4897 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 18:47:19 crc kubenswrapper[4897]: I0214 18:47:19.307766 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 14 18:47:19 crc kubenswrapper[4897]: I0214 18:47:19.307870 4897 scope.go:117] "RemoveContainer" containerID="1f8aa3756daef6854a9d7b0ad138876a28c8a428e190ade29e50973cf208a022" Feb 14 18:47:19 crc kubenswrapper[4897]: I0214 18:47:19.307944 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 18:47:19 crc kubenswrapper[4897]: I0214 18:47:19.804330 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 14 18:47:26 crc kubenswrapper[4897]: I0214 18:47:26.088719 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 18:47:27 crc kubenswrapper[4897]: I0214 18:47:27.561423 4897 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 14 18:47:28 crc kubenswrapper[4897]: I0214 18:47:28.765650 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 18:47:43 crc kubenswrapper[4897]: I0214 18:47:43.356474 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 18:47:46 crc kubenswrapper[4897]: 
I0214 18:47:46.807641 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 18:47:47 crc kubenswrapper[4897]: I0214 18:47:47.606236 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 14 18:47:48 crc kubenswrapper[4897]: I0214 18:47:48.220478 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 18:48:31 crc kubenswrapper[4897]: I0214 18:48:31.726525 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:48:31 crc kubenswrapper[4897]: I0214 18:48:31.727350 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.234636 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zrmdr"] Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.235888 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.253337 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zrmdr"] Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341453 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-registry-tls\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341523 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04a49346-5e0b-4511-8879-6d60e76e2464-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341620 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-bound-sa-token\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341651 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a49346-5e0b-4511-8879-6d60e76e2464-trusted-ca\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04a49346-5e0b-4511-8879-6d60e76e2464-registry-certificates\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341713 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04a49346-5e0b-4511-8879-6d60e76e2464-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.341737 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw4m2\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-kube-api-access-jw4m2\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.364524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.442640 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-bound-sa-token\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.442959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a49346-5e0b-4511-8879-6d60e76e2464-trusted-ca\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.442989 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04a49346-5e0b-4511-8879-6d60e76e2464-registry-certificates\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.443009 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04a49346-5e0b-4511-8879-6d60e76e2464-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.443042 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw4m2\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-kube-api-access-jw4m2\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.443068 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-registry-tls\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.443093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04a49346-5e0b-4511-8879-6d60e76e2464-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.443590 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04a49346-5e0b-4511-8879-6d60e76e2464-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.444186 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04a49346-5e0b-4511-8879-6d60e76e2464-registry-certificates\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.444726 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a49346-5e0b-4511-8879-6d60e76e2464-trusted-ca\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.448679 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04a49346-5e0b-4511-8879-6d60e76e2464-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.448697 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-registry-tls\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.468591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-bound-sa-token\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: \"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.471220 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw4m2\" (UniqueName: \"kubernetes.io/projected/04a49346-5e0b-4511-8879-6d60e76e2464-kube-api-access-jw4m2\") pod \"image-registry-66df7c8f76-zrmdr\" (UID: 
\"04a49346-5e0b-4511-8879-6d60e76e2464\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.590487 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:32 crc kubenswrapper[4897]: I0214 18:48:32.861520 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zrmdr"] Feb 14 18:48:33 crc kubenswrapper[4897]: I0214 18:48:33.822095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" event={"ID":"04a49346-5e0b-4511-8879-6d60e76e2464","Type":"ContainerStarted","Data":"88b3c1c0f324b1387e09e825c981f43a1d4d675987bd38edbfb9ed5df5981d35"} Feb 14 18:48:33 crc kubenswrapper[4897]: I0214 18:48:33.822645 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:33 crc kubenswrapper[4897]: I0214 18:48:33.822678 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" event={"ID":"04a49346-5e0b-4511-8879-6d60e76e2464","Type":"ContainerStarted","Data":"9c4a43856eda2366b3960aca685cebee0e21e1d8d7670bfbf194e1f616db7e7f"} Feb 14 18:48:33 crc kubenswrapper[4897]: I0214 18:48:33.850261 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" podStartSLOduration=1.8502318770000001 podStartE2EDuration="1.850231877s" podCreationTimestamp="2026-02-14 18:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:48:33.841204867 +0000 UTC m=+366.817613440" watchObservedRunningTime="2026-02-14 18:48:33.850231877 +0000 UTC m=+366.826640400" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 
18:48:34.226717 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4c74d"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.227209 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4c74d" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="registry-server" containerID="cri-o://7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102" gracePeriod=30 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.239444 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rph5f"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.239666 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rph5f" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="registry-server" containerID="cri-o://df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77" gracePeriod=30 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.255432 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9n8vm"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.255733 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" containerID="cri-o://8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6" gracePeriod=30 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.263490 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzlp"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.263700 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-twzlp" 
podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="registry-server" containerID="cri-o://85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad" gracePeriod=30 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.277707 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgv5g"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.277948 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bgv5g" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="registry-server" containerID="cri-o://8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9" gracePeriod=30 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.296915 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndtpt"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.297745 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.308011 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndtpt"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.369490 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87321f8-a781-4a08-93e8-2280f2ee57b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.369541 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c87321f8-a781-4a08-93e8-2280f2ee57b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.369567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlm4x\" (UniqueName: \"kubernetes.io/projected/c87321f8-a781-4a08-93e8-2280f2ee57b8-kube-api-access-nlm4x\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.470997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c87321f8-a781-4a08-93e8-2280f2ee57b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: 
\"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.471287 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlm4x\" (UniqueName: \"kubernetes.io/projected/c87321f8-a781-4a08-93e8-2280f2ee57b8-kube-api-access-nlm4x\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.471366 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87321f8-a781-4a08-93e8-2280f2ee57b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.472507 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87321f8-a781-4a08-93e8-2280f2ee57b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.479940 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c87321f8-a781-4a08-93e8-2280f2ee57b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.486928 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlm4x\" 
(UniqueName: \"kubernetes.io/projected/c87321f8-a781-4a08-93e8-2280f2ee57b8-kube-api-access-nlm4x\") pod \"marketplace-operator-79b997595-ndtpt\" (UID: \"c87321f8-a781-4a08-93e8-2280f2ee57b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.700385 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.715213 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.721720 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.726163 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.729906 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.746695 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780253 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7v46\" (UniqueName: \"kubernetes.io/projected/7a553b46-b32c-435f-8e30-338b174cd444-kube-api-access-d7v46\") pod \"7a553b46-b32c-435f-8e30-338b174cd444\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780298 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r86b6\" (UniqueName: \"kubernetes.io/projected/6cdafc37-f772-4b48-b1cf-29759861b373-kube-api-access-r86b6\") pod \"6cdafc37-f772-4b48-b1cf-29759861b373\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780336 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-utilities\") pod \"7a553b46-b32c-435f-8e30-338b174cd444\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780367 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp9mp\" (UniqueName: \"kubernetes.io/projected/037c41d9-7976-43c9-baa6-57aec44c28de-kube-api-access-wp9mp\") pod \"037c41d9-7976-43c9-baa6-57aec44c28de\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780422 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-trusted-ca\") pod \"d62c28f1-696b-4b88-8f46-67abf833ee4c\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780448 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-catalog-content\") pod \"7a553b46-b32c-435f-8e30-338b174cd444\" (UID: \"7a553b46-b32c-435f-8e30-338b174cd444\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780478 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvq8l\" (UniqueName: \"kubernetes.io/projected/d360c9a9-d428-4ca4-9379-e052a6e60b22-kube-api-access-xvq8l\") pod \"d360c9a9-d428-4ca4-9379-e052a6e60b22\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780511 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8s6n\" (UniqueName: \"kubernetes.io/projected/d62c28f1-696b-4b88-8f46-67abf833ee4c-kube-api-access-c8s6n\") pod \"d62c28f1-696b-4b88-8f46-67abf833ee4c\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780546 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-operator-metrics\") pod \"d62c28f1-696b-4b88-8f46-67abf833ee4c\" (UID: \"d62c28f1-696b-4b88-8f46-67abf833ee4c\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780570 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-catalog-content\") pod \"037c41d9-7976-43c9-baa6-57aec44c28de\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780593 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-utilities\") pod 
\"037c41d9-7976-43c9-baa6-57aec44c28de\" (UID: \"037c41d9-7976-43c9-baa6-57aec44c28de\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780614 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-catalog-content\") pod \"6cdafc37-f772-4b48-b1cf-29759861b373\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780642 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-utilities\") pod \"6cdafc37-f772-4b48-b1cf-29759861b373\" (UID: \"6cdafc37-f772-4b48-b1cf-29759861b373\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780670 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-catalog-content\") pod \"d360c9a9-d428-4ca4-9379-e052a6e60b22\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.780706 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-utilities\") pod \"d360c9a9-d428-4ca4-9379-e052a6e60b22\" (UID: \"d360c9a9-d428-4ca4-9379-e052a6e60b22\") " Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.781643 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-utilities" (OuterVolumeSpecName: "utilities") pod "037c41d9-7976-43c9-baa6-57aec44c28de" (UID: "037c41d9-7976-43c9-baa6-57aec44c28de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.781832 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-utilities" (OuterVolumeSpecName: "utilities") pod "6cdafc37-f772-4b48-b1cf-29759861b373" (UID: "6cdafc37-f772-4b48-b1cf-29759861b373"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.783441 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d62c28f1-696b-4b88-8f46-67abf833ee4c" (UID: "d62c28f1-696b-4b88-8f46-67abf833ee4c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.784652 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d62c28f1-696b-4b88-8f46-67abf833ee4c" (UID: "d62c28f1-696b-4b88-8f46-67abf833ee4c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.784921 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-utilities" (OuterVolumeSpecName: "utilities") pod "7a553b46-b32c-435f-8e30-338b174cd444" (UID: "7a553b46-b32c-435f-8e30-338b174cd444"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.787115 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-utilities" (OuterVolumeSpecName: "utilities") pod "d360c9a9-d428-4ca4-9379-e052a6e60b22" (UID: "d360c9a9-d428-4ca4-9379-e052a6e60b22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.788022 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037c41d9-7976-43c9-baa6-57aec44c28de-kube-api-access-wp9mp" (OuterVolumeSpecName: "kube-api-access-wp9mp") pod "037c41d9-7976-43c9-baa6-57aec44c28de" (UID: "037c41d9-7976-43c9-baa6-57aec44c28de"). InnerVolumeSpecName "kube-api-access-wp9mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.792452 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdafc37-f772-4b48-b1cf-29759861b373-kube-api-access-r86b6" (OuterVolumeSpecName: "kube-api-access-r86b6") pod "6cdafc37-f772-4b48-b1cf-29759861b373" (UID: "6cdafc37-f772-4b48-b1cf-29759861b373"). InnerVolumeSpecName "kube-api-access-r86b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.794348 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d360c9a9-d428-4ca4-9379-e052a6e60b22-kube-api-access-xvq8l" (OuterVolumeSpecName: "kube-api-access-xvq8l") pod "d360c9a9-d428-4ca4-9379-e052a6e60b22" (UID: "d360c9a9-d428-4ca4-9379-e052a6e60b22"). InnerVolumeSpecName "kube-api-access-xvq8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.797946 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a553b46-b32c-435f-8e30-338b174cd444-kube-api-access-d7v46" (OuterVolumeSpecName: "kube-api-access-d7v46") pod "7a553b46-b32c-435f-8e30-338b174cd444" (UID: "7a553b46-b32c-435f-8e30-338b174cd444"). InnerVolumeSpecName "kube-api-access-d7v46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.799404 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62c28f1-696b-4b88-8f46-67abf833ee4c-kube-api-access-c8s6n" (OuterVolumeSpecName: "kube-api-access-c8s6n") pod "d62c28f1-696b-4b88-8f46-67abf833ee4c" (UID: "d62c28f1-696b-4b88-8f46-67abf833ee4c"). InnerVolumeSpecName "kube-api-access-c8s6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.821516 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cdafc37-f772-4b48-b1cf-29759861b373" (UID: "6cdafc37-f772-4b48-b1cf-29759861b373"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.838601 4897 generic.go:334] "Generic (PLEG): container finished" podID="6cdafc37-f772-4b48-b1cf-29759861b373" containerID="85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad" exitCode=0 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.838742 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzlp" event={"ID":"6cdafc37-f772-4b48-b1cf-29759861b373","Type":"ContainerDied","Data":"85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.838780 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twzlp" event={"ID":"6cdafc37-f772-4b48-b1cf-29759861b373","Type":"ContainerDied","Data":"b1566aeb2f80d4a19e2c18b65e59709e8e04ca9d8eaeefc09275b7e00dbc712f"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.838805 4897 scope.go:117] "RemoveContainer" containerID="85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.839263 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twzlp" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.845424 4897 generic.go:334] "Generic (PLEG): container finished" podID="7a553b46-b32c-435f-8e30-338b174cd444" containerID="8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9" exitCode=0 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.845496 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerDied","Data":"8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.845522 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgv5g" event={"ID":"7a553b46-b32c-435f-8e30-338b174cd444","Type":"ContainerDied","Data":"2709ebdcf9cefea9c61df9ec193c2b027b86835ab7464fe2a9ea8307cf26bdbd"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.845615 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgv5g" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.862118 4897 generic.go:334] "Generic (PLEG): container finished" podID="037c41d9-7976-43c9-baa6-57aec44c28de" containerID="7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102" exitCode=0 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.862209 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c74d" event={"ID":"037c41d9-7976-43c9-baa6-57aec44c28de","Type":"ContainerDied","Data":"7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.862247 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c74d" event={"ID":"037c41d9-7976-43c9-baa6-57aec44c28de","Type":"ContainerDied","Data":"2ae33caf3fc86ea8248e57588cbc604a255b0a6f68037dd5fe9f850e31d29842"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.867507 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c74d" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.875188 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d360c9a9-d428-4ca4-9379-e052a6e60b22" (UID: "d360c9a9-d428-4ca4-9379-e052a6e60b22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.880924 4897 generic.go:334] "Generic (PLEG): container finished" podID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerID="df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77" exitCode=0 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.880946 4897 scope.go:117] "RemoveContainer" containerID="655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881054 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rph5f" event={"ID":"d360c9a9-d428-4ca4-9379-e052a6e60b22","Type":"ContainerDied","Data":"df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881091 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rph5f" event={"ID":"d360c9a9-d428-4ca4-9379-e052a6e60b22","Type":"ContainerDied","Data":"71890c1e90a3c31dfbff477b26b57ec30f807fc2b82bb4d01ed3d7070a7aed7d"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881213 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rph5f" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881669 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvq8l\" (UniqueName: \"kubernetes.io/projected/d360c9a9-d428-4ca4-9379-e052a6e60b22-kube-api-access-xvq8l\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881692 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8s6n\" (UniqueName: \"kubernetes.io/projected/d62c28f1-696b-4b88-8f46-67abf833ee4c-kube-api-access-c8s6n\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881701 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881712 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881723 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881732 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cdafc37-f772-4b48-b1cf-29759861b373-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881741 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 
18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881749 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d360c9a9-d428-4ca4-9379-e052a6e60b22-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881758 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7v46\" (UniqueName: \"kubernetes.io/projected/7a553b46-b32c-435f-8e30-338b174cd444-kube-api-access-d7v46\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881768 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r86b6\" (UniqueName: \"kubernetes.io/projected/6cdafc37-f772-4b48-b1cf-29759861b373-kube-api-access-r86b6\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881776 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881785 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp9mp\" (UniqueName: \"kubernetes.io/projected/037c41d9-7976-43c9-baa6-57aec44c28de-kube-api-access-wp9mp\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.881793 4897 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d62c28f1-696b-4b88-8f46-67abf833ee4c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.884660 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "037c41d9-7976-43c9-baa6-57aec44c28de" (UID: "037c41d9-7976-43c9-baa6-57aec44c28de"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.885815 4897 generic.go:334] "Generic (PLEG): container finished" podID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerID="8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6" exitCode=0 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.885926 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.886390 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" event={"ID":"d62c28f1-696b-4b88-8f46-67abf833ee4c","Type":"ContainerDied","Data":"8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.886420 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9n8vm" event={"ID":"d62c28f1-696b-4b88-8f46-67abf833ee4c","Type":"ContainerDied","Data":"34cee1d3547e92af91fa3962b5a8ab70eb58890ced4e523fb874277431ea7665"} Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.907826 4897 scope.go:117] "RemoveContainer" containerID="979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.908371 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzlp"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.918457 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-twzlp"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.922364 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rph5f"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.926445 4897 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rph5f"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.930141 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9n8vm"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.933156 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9n8vm"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.936282 4897 scope.go:117] "RemoveContainer" containerID="85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad" Feb 14 18:48:34 crc kubenswrapper[4897]: E0214 18:48:34.937067 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad\": container with ID starting with 85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad not found: ID does not exist" containerID="85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.937094 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad"} err="failed to get container status \"85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad\": rpc error: code = NotFound desc = could not find container \"85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad\": container with ID starting with 85f0aaa618a56bc1bb5010a755815f41953d9ab2f19ed3d285556ffb37b071ad not found: ID does not exist" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.937114 4897 scope.go:117] "RemoveContainer" containerID="655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94" Feb 14 18:48:34 crc kubenswrapper[4897]: E0214 18:48:34.937502 4897 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94\": container with ID starting with 655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94 not found: ID does not exist" containerID="655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.937519 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94"} err="failed to get container status \"655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94\": rpc error: code = NotFound desc = could not find container \"655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94\": container with ID starting with 655ac657f55cf60ca8f4cf3187edfeabee4612635a9de808ed951a557955ce94 not found: ID does not exist" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.937533 4897 scope.go:117] "RemoveContainer" containerID="979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482" Feb 14 18:48:34 crc kubenswrapper[4897]: E0214 18:48:34.937983 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482\": container with ID starting with 979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482 not found: ID does not exist" containerID="979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.938000 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482"} err="failed to get container status \"979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482\": rpc error: code = NotFound desc = could not find container 
\"979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482\": container with ID starting with 979383bee7b1c2732fe35b4cf4a78878c174cc9665367933947038e1404d4482 not found: ID does not exist" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.938013 4897 scope.go:117] "RemoveContainer" containerID="8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.953391 4897 scope.go:117] "RemoveContainer" containerID="be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.967892 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndtpt"] Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.971323 4897 scope.go:117] "RemoveContainer" containerID="ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b" Feb 14 18:48:34 crc kubenswrapper[4897]: W0214 18:48:34.975175 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87321f8_a781_4a08_93e8_2280f2ee57b8.slice/crio-41c33acabc7a37f3bd2331c72961f0be37ad8a199c5006bf3213409622a7d244 WatchSource:0}: Error finding container 41c33acabc7a37f3bd2331c72961f0be37ad8a199c5006bf3213409622a7d244: Status 404 returned error can't find the container with id 41c33acabc7a37f3bd2331c72961f0be37ad8a199c5006bf3213409622a7d244 Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.983476 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037c41d9-7976-43c9-baa6-57aec44c28de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.985346 4897 scope.go:117] "RemoveContainer" containerID="8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9" Feb 14 18:48:34 crc kubenswrapper[4897]: E0214 18:48:34.985625 4897 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9\": container with ID starting with 8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9 not found: ID does not exist" containerID="8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.985660 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9"} err="failed to get container status \"8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9\": rpc error: code = NotFound desc = could not find container \"8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9\": container with ID starting with 8b0af50173192025174bf574ac8424a4eae8b650c256abc93133dda2481c0fb9 not found: ID does not exist" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.985686 4897 scope.go:117] "RemoveContainer" containerID="be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6" Feb 14 18:48:34 crc kubenswrapper[4897]: E0214 18:48:34.985871 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6\": container with ID starting with be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6 not found: ID does not exist" containerID="be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.985890 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6"} err="failed to get container status \"be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6\": rpc error: code = NotFound desc = could 
not find container \"be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6\": container with ID starting with be3651668419941867ed3235576acc493035845436659f94a300ca64e7a2c8f6 not found: ID does not exist" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.985904 4897 scope.go:117] "RemoveContainer" containerID="ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b" Feb 14 18:48:34 crc kubenswrapper[4897]: E0214 18:48:34.986195 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b\": container with ID starting with ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b not found: ID does not exist" containerID="ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.986215 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b"} err="failed to get container status \"ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b\": rpc error: code = NotFound desc = could not find container \"ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b\": container with ID starting with ff96f816bfa54510f698dff41c313699d16a6c4e7f32e794c0bf47b33ce80a0b not found: ID does not exist" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.986228 4897 scope.go:117] "RemoveContainer" containerID="7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.994349 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a553b46-b32c-435f-8e30-338b174cd444" (UID: "7a553b46-b32c-435f-8e30-338b174cd444"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:48:34 crc kubenswrapper[4897]: I0214 18:48:34.999526 4897 scope.go:117] "RemoveContainer" containerID="90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.019441 4897 scope.go:117] "RemoveContainer" containerID="af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.038860 4897 scope.go:117] "RemoveContainer" containerID="7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.039350 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102\": container with ID starting with 7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102 not found: ID does not exist" containerID="7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.039386 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102"} err="failed to get container status \"7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102\": rpc error: code = NotFound desc = could not find container \"7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102\": container with ID starting with 7489420b8f7e6a38e90f23215d803ce5203e5f3494e5d61ea473bf4c4ffca102 not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.039411 4897 scope.go:117] "RemoveContainer" containerID="90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.039880 4897 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514\": container with ID starting with 90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514 not found: ID does not exist" containerID="90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.039945 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514"} err="failed to get container status \"90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514\": rpc error: code = NotFound desc = could not find container \"90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514\": container with ID starting with 90567fa9cca2f7340fc6646d72e23d706212039b065c0a15da26caf1000ca514 not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.039959 4897 scope.go:117] "RemoveContainer" containerID="af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.040431 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e\": container with ID starting with af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e not found: ID does not exist" containerID="af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.040503 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e"} err="failed to get container status \"af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e\": rpc error: code = NotFound desc = could not find container 
\"af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e\": container with ID starting with af6222ce2bdd1205d9c09546c23a6ea1ffda59cc2c23a4bdf06eb856fb6f8d7e not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.040545 4897 scope.go:117] "RemoveContainer" containerID="df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.071327 4897 scope.go:117] "RemoveContainer" containerID="7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.084262 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a553b46-b32c-435f-8e30-338b174cd444-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.084560 4897 scope.go:117] "RemoveContainer" containerID="06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.101427 4897 scope.go:117] "RemoveContainer" containerID="df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.101765 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77\": container with ID starting with df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77 not found: ID does not exist" containerID="df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.101795 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77"} err="failed to get container status \"df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77\": rpc error: 
code = NotFound desc = could not find container \"df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77\": container with ID starting with df2c0c009e1a1304236f9e5fd2b291a64ab0346e216c31892406bbbb5e496e77 not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.101815 4897 scope.go:117] "RemoveContainer" containerID="7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.102461 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810\": container with ID starting with 7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810 not found: ID does not exist" containerID="7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.102480 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810"} err="failed to get container status \"7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810\": rpc error: code = NotFound desc = could not find container \"7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810\": container with ID starting with 7880691ac52f749b5389dcc750c89461df2786eaf24dae02299bdc702972c810 not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.102493 4897 scope.go:117] "RemoveContainer" containerID="06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.102863 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab\": container with ID starting with 
06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab not found: ID does not exist" containerID="06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.102881 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab"} err="failed to get container status \"06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab\": rpc error: code = NotFound desc = could not find container \"06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab\": container with ID starting with 06c0a7140f775be4f1022e407c8942dbb90d63eacb3b6af8c236b44e2b16ccab not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.102892 4897 scope.go:117] "RemoveContainer" containerID="8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.121675 4897 scope.go:117] "RemoveContainer" containerID="8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6" Feb 14 18:48:35 crc kubenswrapper[4897]: E0214 18:48:35.123414 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6\": container with ID starting with 8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6 not found: ID does not exist" containerID="8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.123453 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6"} err="failed to get container status \"8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6\": rpc error: code = NotFound desc = could not find container 
\"8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6\": container with ID starting with 8be8227f8b296e15096f060fc7e74c2d49a8db27f71ad8b98a4b6bfeecc2f5d6 not found: ID does not exist" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.174052 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgv5g"] Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.181309 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bgv5g"] Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.194916 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4c74d"] Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.204772 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4c74d"] Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.800895 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" path="/var/lib/kubelet/pods/037c41d9-7976-43c9-baa6-57aec44c28de/volumes" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.802768 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" path="/var/lib/kubelet/pods/6cdafc37-f772-4b48-b1cf-29759861b373/volumes" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.804564 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a553b46-b32c-435f-8e30-338b174cd444" path="/var/lib/kubelet/pods/7a553b46-b32c-435f-8e30-338b174cd444/volumes" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.807085 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" path="/var/lib/kubelet/pods/d360c9a9-d428-4ca4-9379-e052a6e60b22/volumes" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.808564 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" path="/var/lib/kubelet/pods/d62c28f1-696b-4b88-8f46-67abf833ee4c/volumes" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.895102 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" event={"ID":"c87321f8-a781-4a08-93e8-2280f2ee57b8","Type":"ContainerStarted","Data":"2f59206a049037cd942e28d42c1f94be9a7cf419bae7a16c50f43c8c708a7cd2"} Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.895748 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.895797 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" event={"ID":"c87321f8-a781-4a08-93e8-2280f2ee57b8","Type":"ContainerStarted","Data":"41c33acabc7a37f3bd2331c72961f0be37ad8a199c5006bf3213409622a7d244"} Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.899346 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" Feb 14 18:48:35 crc kubenswrapper[4897]: I0214 18:48:35.926121 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" podStartSLOduration=1.926091156 podStartE2EDuration="1.926091156s" podCreationTimestamp="2026-02-14 18:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:48:35.917916504 +0000 UTC m=+368.894325067" watchObservedRunningTime="2026-02-14 18:48:35.926091156 +0000 UTC m=+368.902499679" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048253 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zqcpc"] Feb 14 18:48:36 crc kubenswrapper[4897]: 
E0214 18:48:36.048457 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048468 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048477 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048482 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048493 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048499 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048509 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048515 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048522 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048527 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 
18:48:36.048537 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048542 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048551 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048556 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048567 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048573 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048582 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048588 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048594 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048599 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 
18:48:36.048609 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048615 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048623 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048629 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="extract-content" Feb 14 18:48:36 crc kubenswrapper[4897]: E0214 18:48:36.048636 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048643 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="extract-utilities" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048724 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdafc37-f772-4b48-b1cf-29759861b373" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048732 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a553b46-b32c-435f-8e30-338b174cd444" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048741 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d360c9a9-d428-4ca4-9379-e052a6e60b22" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.048747 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62c28f1-696b-4b88-8f46-67abf833ee4c" containerName="marketplace-operator" Feb 14 18:48:36 crc 
kubenswrapper[4897]: I0214 18:48:36.048756 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="037c41d9-7976-43c9-baa6-57aec44c28de" containerName="registry-server" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.049400 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.051564 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.069390 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqcpc"] Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.096363 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac059afa-1f7b-480b-8650-c227c33ba696-utilities\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.096424 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d5q7\" (UniqueName: \"kubernetes.io/projected/ac059afa-1f7b-480b-8650-c227c33ba696-kube-api-access-6d5q7\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.096464 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac059afa-1f7b-480b-8650-c227c33ba696-catalog-content\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc 
kubenswrapper[4897]: I0214 18:48:36.197465 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac059afa-1f7b-480b-8650-c227c33ba696-utilities\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.197702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d5q7\" (UniqueName: \"kubernetes.io/projected/ac059afa-1f7b-480b-8650-c227c33ba696-kube-api-access-6d5q7\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.198312 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac059afa-1f7b-480b-8650-c227c33ba696-catalog-content\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.198494 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac059afa-1f7b-480b-8650-c227c33ba696-utilities\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.199106 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac059afa-1f7b-480b-8650-c227c33ba696-catalog-content\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.221707 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d5q7\" (UniqueName: \"kubernetes.io/projected/ac059afa-1f7b-480b-8650-c227c33ba696-kube-api-access-6d5q7\") pod \"redhat-marketplace-zqcpc\" (UID: \"ac059afa-1f7b-480b-8650-c227c33ba696\") " pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.373013 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.629098 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqcpc"] Feb 14 18:48:36 crc kubenswrapper[4897]: W0214 18:48:36.639223 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac059afa_1f7b_480b_8650_c227c33ba696.slice/crio-5bb9e38a9d76c108611eaea787364d3b5ee9243a3a72302fbe2c1f578d83e745 WatchSource:0}: Error finding container 5bb9e38a9d76c108611eaea787364d3b5ee9243a3a72302fbe2c1f578d83e745: Status 404 returned error can't find the container with id 5bb9e38a9d76c108611eaea787364d3b5ee9243a3a72302fbe2c1f578d83e745 Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.911962 4897 generic.go:334] "Generic (PLEG): container finished" podID="ac059afa-1f7b-480b-8650-c227c33ba696" containerID="40f52dd4ab80bfff1e0cc58e09aeae59ed2dc1c495720dbe096488007f1bbac7" exitCode=0 Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.912963 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqcpc" event={"ID":"ac059afa-1f7b-480b-8650-c227c33ba696","Type":"ContainerDied","Data":"40f52dd4ab80bfff1e0cc58e09aeae59ed2dc1c495720dbe096488007f1bbac7"} Feb 14 18:48:36 crc kubenswrapper[4897]: I0214 18:48:36.913054 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqcpc" 
event={"ID":"ac059afa-1f7b-480b-8650-c227c33ba696","Type":"ContainerStarted","Data":"5bb9e38a9d76c108611eaea787364d3b5ee9243a3a72302fbe2c1f578d83e745"} Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.042496 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgcv6"] Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.046245 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.048845 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.058233 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgcv6"] Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.113976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588tz\" (UniqueName: \"kubernetes.io/projected/3e2a05b2-5d93-4252-a08b-6b35f225e167-kube-api-access-588tz\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.114234 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2a05b2-5d93-4252-a08b-6b35f225e167-utilities\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.114402 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2a05b2-5d93-4252-a08b-6b35f225e167-catalog-content\") pod 
\"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.215525 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-588tz\" (UniqueName: \"kubernetes.io/projected/3e2a05b2-5d93-4252-a08b-6b35f225e167-kube-api-access-588tz\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.215598 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2a05b2-5d93-4252-a08b-6b35f225e167-utilities\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.215654 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2a05b2-5d93-4252-a08b-6b35f225e167-catalog-content\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.216276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2a05b2-5d93-4252-a08b-6b35f225e167-catalog-content\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.216485 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2a05b2-5d93-4252-a08b-6b35f225e167-utilities\") pod \"certified-operators-vgcv6\" (UID: 
\"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.236018 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-588tz\" (UniqueName: \"kubernetes.io/projected/3e2a05b2-5d93-4252-a08b-6b35f225e167-kube-api-access-588tz\") pod \"certified-operators-vgcv6\" (UID: \"3e2a05b2-5d93-4252-a08b-6b35f225e167\") " pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.363876 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.579683 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgcv6"] Feb 14 18:48:37 crc kubenswrapper[4897]: W0214 18:48:37.588260 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e2a05b2_5d93_4252_a08b_6b35f225e167.slice/crio-922ce39a708ef779f22370736712c8206c20b15b1887592a6b28a783e2e25408 WatchSource:0}: Error finding container 922ce39a708ef779f22370736712c8206c20b15b1887592a6b28a783e2e25408: Status 404 returned error can't find the container with id 922ce39a708ef779f22370736712c8206c20b15b1887592a6b28a783e2e25408 Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.920407 4897 generic.go:334] "Generic (PLEG): container finished" podID="3e2a05b2-5d93-4252-a08b-6b35f225e167" containerID="bcda8d490390a43bdfb4dd8f3cdc716ba788f7a6c50e5633d64a67457d01f6e2" exitCode=0 Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.920490 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcv6" event={"ID":"3e2a05b2-5d93-4252-a08b-6b35f225e167","Type":"ContainerDied","Data":"bcda8d490390a43bdfb4dd8f3cdc716ba788f7a6c50e5633d64a67457d01f6e2"} Feb 14 18:48:37 
crc kubenswrapper[4897]: I0214 18:48:37.920517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcv6" event={"ID":"3e2a05b2-5d93-4252-a08b-6b35f225e167","Type":"ContainerStarted","Data":"922ce39a708ef779f22370736712c8206c20b15b1887592a6b28a783e2e25408"} Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.923808 4897 generic.go:334] "Generic (PLEG): container finished" podID="ac059afa-1f7b-480b-8650-c227c33ba696" containerID="9c147c8d99e1e5d36f63b031c7736c0bc8597aaacb13f9f0926928e45b0eb022" exitCode=0 Feb 14 18:48:37 crc kubenswrapper[4897]: I0214 18:48:37.923949 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqcpc" event={"ID":"ac059afa-1f7b-480b-8650-c227c33ba696","Type":"ContainerDied","Data":"9c147c8d99e1e5d36f63b031c7736c0bc8597aaacb13f9f0926928e45b0eb022"} Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.443317 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-79v5s"] Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.445020 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.447467 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.459632 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79v5s"] Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.531466 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170e914d-6f55-4d61-bb7d-36dae4e4b002-catalog-content\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.531735 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170e914d-6f55-4d61-bb7d-36dae4e4b002-utilities\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.531885 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfznh\" (UniqueName: \"kubernetes.io/projected/170e914d-6f55-4d61-bb7d-36dae4e4b002-kube-api-access-vfznh\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.632976 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170e914d-6f55-4d61-bb7d-36dae4e4b002-utilities\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " 
pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.633089 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfznh\" (UniqueName: \"kubernetes.io/projected/170e914d-6f55-4d61-bb7d-36dae4e4b002-kube-api-access-vfznh\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.633140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170e914d-6f55-4d61-bb7d-36dae4e4b002-catalog-content\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.633522 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170e914d-6f55-4d61-bb7d-36dae4e4b002-utilities\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.633697 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170e914d-6f55-4d61-bb7d-36dae4e4b002-catalog-content\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.656331 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfznh\" (UniqueName: \"kubernetes.io/projected/170e914d-6f55-4d61-bb7d-36dae4e4b002-kube-api-access-vfznh\") pod \"redhat-operators-79v5s\" (UID: \"170e914d-6f55-4d61-bb7d-36dae4e4b002\") " pod="openshift-marketplace/redhat-operators-79v5s" Feb 
14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.774386 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.933052 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcv6" event={"ID":"3e2a05b2-5d93-4252-a08b-6b35f225e167","Type":"ContainerStarted","Data":"19c8501a006f7c3c45cdf610d89f8fa289e02d883fe3536c0688ab9298b3542d"} Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.936437 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqcpc" event={"ID":"ac059afa-1f7b-480b-8650-c227c33ba696","Type":"ContainerStarted","Data":"fc51320c63985b819edb2d9631828e439598ad27720e3c609ddc64c2d9377d96"} Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.973003 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zqcpc" podStartSLOduration=1.569656586 podStartE2EDuration="2.972984945s" podCreationTimestamp="2026-02-14 18:48:36 +0000 UTC" firstStartedPulling="2026-02-14 18:48:36.914448821 +0000 UTC m=+369.890857344" lastFinishedPulling="2026-02-14 18:48:38.31777721 +0000 UTC m=+371.294185703" observedRunningTime="2026-02-14 18:48:38.969959078 +0000 UTC m=+371.946367571" watchObservedRunningTime="2026-02-14 18:48:38.972984945 +0000 UTC m=+371.949393428" Feb 14 18:48:38 crc kubenswrapper[4897]: I0214 18:48:38.982639 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79v5s"] Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.438997 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9dlm"] Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.441931 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.443837 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.460225 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9dlm"] Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.560775 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93aca208-9cef-49a3-917c-2bb7c314d537-utilities\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.560862 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93aca208-9cef-49a3-917c-2bb7c314d537-catalog-content\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.560921 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfkr\" (UniqueName: \"kubernetes.io/projected/93aca208-9cef-49a3-917c-2bb7c314d537-kube-api-access-lkfkr\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.662493 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93aca208-9cef-49a3-917c-2bb7c314d537-utilities\") pod \"community-operators-w9dlm\" (UID: 
\"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.662600 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93aca208-9cef-49a3-917c-2bb7c314d537-catalog-content\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.662667 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkfkr\" (UniqueName: \"kubernetes.io/projected/93aca208-9cef-49a3-917c-2bb7c314d537-kube-api-access-lkfkr\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.663179 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93aca208-9cef-49a3-917c-2bb7c314d537-utilities\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.663344 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93aca208-9cef-49a3-917c-2bb7c314d537-catalog-content\") pod \"community-operators-w9dlm\" (UID: \"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.688317 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkfkr\" (UniqueName: \"kubernetes.io/projected/93aca208-9cef-49a3-917c-2bb7c314d537-kube-api-access-lkfkr\") pod \"community-operators-w9dlm\" (UID: 
\"93aca208-9cef-49a3-917c-2bb7c314d537\") " pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.771330 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.944678 4897 generic.go:334] "Generic (PLEG): container finished" podID="3e2a05b2-5d93-4252-a08b-6b35f225e167" containerID="19c8501a006f7c3c45cdf610d89f8fa289e02d883fe3536c0688ab9298b3542d" exitCode=0 Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.944734 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcv6" event={"ID":"3e2a05b2-5d93-4252-a08b-6b35f225e167","Type":"ContainerDied","Data":"19c8501a006f7c3c45cdf610d89f8fa289e02d883fe3536c0688ab9298b3542d"} Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.948424 4897 generic.go:334] "Generic (PLEG): container finished" podID="170e914d-6f55-4d61-bb7d-36dae4e4b002" containerID="ac848cf039384ff0b58942818cad7118514daf0e44313a1cf5d69d2503c7d5ed" exitCode=0 Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.949145 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79v5s" event={"ID":"170e914d-6f55-4d61-bb7d-36dae4e4b002","Type":"ContainerDied","Data":"ac848cf039384ff0b58942818cad7118514daf0e44313a1cf5d69d2503c7d5ed"} Feb 14 18:48:39 crc kubenswrapper[4897]: I0214 18:48:39.949209 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79v5s" event={"ID":"170e914d-6f55-4d61-bb7d-36dae4e4b002","Type":"ContainerStarted","Data":"ec4543107f8d427649e46dcc9f248844d0b134afde6221db996638acb57a9593"} Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.179433 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9dlm"] Feb 14 18:48:40 crc kubenswrapper[4897]: W0214 18:48:40.181377 
4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93aca208_9cef_49a3_917c_2bb7c314d537.slice/crio-885b9cb1caa82cd44999dc056c3d02534f9d98ad55da5da38ca4e513ba7690de WatchSource:0}: Error finding container 885b9cb1caa82cd44999dc056c3d02534f9d98ad55da5da38ca4e513ba7690de: Status 404 returned error can't find the container with id 885b9cb1caa82cd44999dc056c3d02534f9d98ad55da5da38ca4e513ba7690de Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.955571 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcv6" event={"ID":"3e2a05b2-5d93-4252-a08b-6b35f225e167","Type":"ContainerStarted","Data":"42c5f67aa153de546914f06ddba56809da590eec1d0ac2548f8a0542f6977cf7"} Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.957968 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79v5s" event={"ID":"170e914d-6f55-4d61-bb7d-36dae4e4b002","Type":"ContainerStarted","Data":"3ed11e9c628e7935df572e633e05810f14b475e28a89f769e5b576bd1a4034ed"} Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.959422 4897 generic.go:334] "Generic (PLEG): container finished" podID="93aca208-9cef-49a3-917c-2bb7c314d537" containerID="f6aa59a205b3b5bd7f2eeb6838b78a6faa31a9f5e0187a30ca61f0c1d1cbc0c3" exitCode=0 Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.959485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9dlm" event={"ID":"93aca208-9cef-49a3-917c-2bb7c314d537","Type":"ContainerDied","Data":"f6aa59a205b3b5bd7f2eeb6838b78a6faa31a9f5e0187a30ca61f0c1d1cbc0c3"} Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.959520 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9dlm" 
event={"ID":"93aca208-9cef-49a3-917c-2bb7c314d537","Type":"ContainerStarted","Data":"885b9cb1caa82cd44999dc056c3d02534f9d98ad55da5da38ca4e513ba7690de"} Feb 14 18:48:40 crc kubenswrapper[4897]: I0214 18:48:40.977148 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vgcv6" podStartSLOduration=1.573653244 podStartE2EDuration="3.977121715s" podCreationTimestamp="2026-02-14 18:48:37 +0000 UTC" firstStartedPulling="2026-02-14 18:48:37.923944745 +0000 UTC m=+370.900353228" lastFinishedPulling="2026-02-14 18:48:40.327413216 +0000 UTC m=+373.303821699" observedRunningTime="2026-02-14 18:48:40.976318028 +0000 UTC m=+373.952726511" watchObservedRunningTime="2026-02-14 18:48:40.977121715 +0000 UTC m=+373.953530228" Feb 14 18:48:41 crc kubenswrapper[4897]: I0214 18:48:41.965854 4897 generic.go:334] "Generic (PLEG): container finished" podID="170e914d-6f55-4d61-bb7d-36dae4e4b002" containerID="3ed11e9c628e7935df572e633e05810f14b475e28a89f769e5b576bd1a4034ed" exitCode=0 Feb 14 18:48:41 crc kubenswrapper[4897]: I0214 18:48:41.965920 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79v5s" event={"ID":"170e914d-6f55-4d61-bb7d-36dae4e4b002","Type":"ContainerDied","Data":"3ed11e9c628e7935df572e633e05810f14b475e28a89f769e5b576bd1a4034ed"} Feb 14 18:48:41 crc kubenswrapper[4897]: I0214 18:48:41.969270 4897 generic.go:334] "Generic (PLEG): container finished" podID="93aca208-9cef-49a3-917c-2bb7c314d537" containerID="1044aeaaca761c8b6cb27a8202ceca471de25a569dd0ec76c2723bf13a105e39" exitCode=0 Feb 14 18:48:41 crc kubenswrapper[4897]: I0214 18:48:41.969325 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9dlm" event={"ID":"93aca208-9cef-49a3-917c-2bb7c314d537","Type":"ContainerDied","Data":"1044aeaaca761c8b6cb27a8202ceca471de25a569dd0ec76c2723bf13a105e39"} Feb 14 18:48:42 crc kubenswrapper[4897]: I0214 18:48:42.978623 
4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79v5s" event={"ID":"170e914d-6f55-4d61-bb7d-36dae4e4b002","Type":"ContainerStarted","Data":"d3dff89f4e30c22b98ecaefbb9878a36acb581c71fba0acc66bb3e363dccf36c"} Feb 14 18:48:42 crc kubenswrapper[4897]: I0214 18:48:42.980828 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9dlm" event={"ID":"93aca208-9cef-49a3-917c-2bb7c314d537","Type":"ContainerStarted","Data":"11dc8a2f2d84151d270a46dd3766022ca57691124e3ce51e040eabb25e56ca7c"} Feb 14 18:48:42 crc kubenswrapper[4897]: I0214 18:48:42.992870 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-79v5s" podStartSLOduration=2.4587739969999998 podStartE2EDuration="4.992850606s" podCreationTimestamp="2026-02-14 18:48:38 +0000 UTC" firstStartedPulling="2026-02-14 18:48:39.949971026 +0000 UTC m=+372.926379509" lastFinishedPulling="2026-02-14 18:48:42.484047635 +0000 UTC m=+375.460456118" observedRunningTime="2026-02-14 18:48:42.991847673 +0000 UTC m=+375.968256166" watchObservedRunningTime="2026-02-14 18:48:42.992850606 +0000 UTC m=+375.969259079" Feb 14 18:48:43 crc kubenswrapper[4897]: I0214 18:48:43.009147 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9dlm" podStartSLOduration=2.604986834 podStartE2EDuration="4.009129208s" podCreationTimestamp="2026-02-14 18:48:39 +0000 UTC" firstStartedPulling="2026-02-14 18:48:40.96075671 +0000 UTC m=+373.937165223" lastFinishedPulling="2026-02-14 18:48:42.364899124 +0000 UTC m=+375.341307597" observedRunningTime="2026-02-14 18:48:43.007264078 +0000 UTC m=+375.983672591" watchObservedRunningTime="2026-02-14 18:48:43.009129208 +0000 UTC m=+375.985537691" Feb 14 18:48:46 crc kubenswrapper[4897]: I0214 18:48:46.373236 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:46 crc kubenswrapper[4897]: I0214 18:48:46.373608 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:46 crc kubenswrapper[4897]: I0214 18:48:46.428129 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:47 crc kubenswrapper[4897]: I0214 18:48:47.048933 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zqcpc" Feb 14 18:48:47 crc kubenswrapper[4897]: I0214 18:48:47.364689 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:47 crc kubenswrapper[4897]: I0214 18:48:47.364820 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:47 crc kubenswrapper[4897]: I0214 18:48:47.409366 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:48 crc kubenswrapper[4897]: I0214 18:48:48.101289 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgcv6" Feb 14 18:48:48 crc kubenswrapper[4897]: I0214 18:48:48.775221 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:48 crc kubenswrapper[4897]: I0214 18:48:48.776075 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:48 crc kubenswrapper[4897]: I0214 18:48:48.816829 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:49 crc kubenswrapper[4897]: I0214 
18:48:49.069986 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-79v5s" Feb 14 18:48:49 crc kubenswrapper[4897]: I0214 18:48:49.772547 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:49 crc kubenswrapper[4897]: I0214 18:48:49.772625 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:49 crc kubenswrapper[4897]: I0214 18:48:49.811215 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:50 crc kubenswrapper[4897]: I0214 18:48:50.064605 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9dlm" Feb 14 18:48:52 crc kubenswrapper[4897]: I0214 18:48:52.595309 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" Feb 14 18:48:52 crc kubenswrapper[4897]: I0214 18:48:52.647955 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fq4zf"] Feb 14 18:49:01 crc kubenswrapper[4897]: I0214 18:49:01.725882 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:49:01 crc kubenswrapper[4897]: I0214 18:49:01.727131 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.884197 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8"] Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.885913 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.889792 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.889871 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.890144 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.890201 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.890479 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.892113 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8"] Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.929074 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6f8\" (UniqueName: \"kubernetes.io/projected/a71e814f-5ebd-4332-8b7e-e505515be819-kube-api-access-9w6f8\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.929277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71e814f-5ebd-4332-8b7e-e505515be819-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:06 crc kubenswrapper[4897]: I0214 18:49:06.929659 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a71e814f-5ebd-4332-8b7e-e505515be819-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.031227 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a71e814f-5ebd-4332-8b7e-e505515be819-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.031317 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w6f8\" (UniqueName: \"kubernetes.io/projected/a71e814f-5ebd-4332-8b7e-e505515be819-kube-api-access-9w6f8\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.031405 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71e814f-5ebd-4332-8b7e-e505515be819-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.032315 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a71e814f-5ebd-4332-8b7e-e505515be819-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.038685 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71e814f-5ebd-4332-8b7e-e505515be819-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.053568 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6f8\" (UniqueName: \"kubernetes.io/projected/a71e814f-5ebd-4332-8b7e-e505515be819-kube-api-access-9w6f8\") pod \"cluster-monitoring-operator-6d5b84845-k2ln8\" (UID: \"a71e814f-5ebd-4332-8b7e-e505515be819\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.211423 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" Feb 14 18:49:07 crc kubenswrapper[4897]: I0214 18:49:07.516263 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8"] Feb 14 18:49:08 crc kubenswrapper[4897]: I0214 18:49:08.122451 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" event={"ID":"a71e814f-5ebd-4332-8b7e-e505515be819","Type":"ContainerStarted","Data":"4acf422f9398ca05697b9ed539023f14677d220790b6feb49ceb4bd57efa67c7"} Feb 14 18:49:09 crc kubenswrapper[4897]: I0214 18:49:09.885464 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7"] Feb 14 18:49:09 crc kubenswrapper[4897]: I0214 18:49:09.886709 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:09 crc kubenswrapper[4897]: I0214 18:49:09.888745 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-6hzl2" Feb 14 18:49:09 crc kubenswrapper[4897]: I0214 18:49:09.889850 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 14 18:49:09 crc kubenswrapper[4897]: I0214 18:49:09.890595 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7"] Feb 14 18:49:09 crc kubenswrapper[4897]: I0214 18:49:09.977503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4dab2db8-b8bf-4421-a71e-fb52c69e8a8e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tllh7\" (UID: 
\"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:10 crc kubenswrapper[4897]: I0214 18:49:10.078868 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4dab2db8-b8bf-4421-a71e-fb52c69e8a8e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tllh7\" (UID: \"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:10 crc kubenswrapper[4897]: I0214 18:49:10.087989 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4dab2db8-b8bf-4421-a71e-fb52c69e8a8e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tllh7\" (UID: \"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:10 crc kubenswrapper[4897]: I0214 18:49:10.138297 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" event={"ID":"a71e814f-5ebd-4332-8b7e-e505515be819","Type":"ContainerStarted","Data":"b7763d0ed12a3531ab3baaa8c7088408192f878b62bcc380bafd7d1c486bdec1"} Feb 14 18:49:10 crc kubenswrapper[4897]: I0214 18:49:10.162283 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-k2ln8" podStartSLOduration=2.516929934 podStartE2EDuration="4.162258383s" podCreationTimestamp="2026-02-14 18:49:06 +0000 UTC" firstStartedPulling="2026-02-14 18:49:07.5238652 +0000 UTC m=+400.500273703" lastFinishedPulling="2026-02-14 18:49:09.169193649 +0000 UTC m=+402.145602152" observedRunningTime="2026-02-14 18:49:10.161675465 +0000 UTC m=+403.138083978" watchObservedRunningTime="2026-02-14 18:49:10.162258383 +0000 UTC 
m=+403.138666896" Feb 14 18:49:10 crc kubenswrapper[4897]: I0214 18:49:10.203183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:10 crc kubenswrapper[4897]: I0214 18:49:10.488347 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7"] Feb 14 18:49:11 crc kubenswrapper[4897]: I0214 18:49:11.145635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" event={"ID":"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e","Type":"ContainerStarted","Data":"8d1c946547f8a68504b719ed26ad4476babc545682d40324c7a9dabc3c15ac5c"} Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.155901 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" event={"ID":"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e","Type":"ContainerStarted","Data":"3380404fc0cd5c09902e963dbc200baed8bc7182fbd34afb88a9d5a09d0fc3b2"} Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.156610 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.165639 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.180386 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podStartSLOduration=1.824515222 podStartE2EDuration="3.180353348s" podCreationTimestamp="2026-02-14 18:49:09 +0000 UTC" firstStartedPulling="2026-02-14 18:49:10.496011198 +0000 UTC m=+403.472419701" 
lastFinishedPulling="2026-02-14 18:49:11.851849354 +0000 UTC m=+404.828257827" observedRunningTime="2026-02-14 18:49:12.176613724 +0000 UTC m=+405.153022267" watchObservedRunningTime="2026-02-14 18:49:12.180353348 +0000 UTC m=+405.156761861" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.951170 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nw4hd"] Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.952784 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.955885 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.956779 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.957561 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-rbvlw" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.958101 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.962253 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea21323d-7af3-478b-bb81-9dfe45dd182e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.962329 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6qg92\" (UniqueName: \"kubernetes.io/projected/ea21323d-7af3-478b-bb81-9dfe45dd182e-kube-api-access-6qg92\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.962985 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea21323d-7af3-478b-bb81-9dfe45dd182e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.963120 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea21323d-7af3-478b-bb81-9dfe45dd182e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:12 crc kubenswrapper[4897]: I0214 18:49:12.976127 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nw4hd"] Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.063617 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea21323d-7af3-478b-bb81-9dfe45dd182e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.064179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6qg92\" (UniqueName: \"kubernetes.io/projected/ea21323d-7af3-478b-bb81-9dfe45dd182e-kube-api-access-6qg92\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.064306 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea21323d-7af3-478b-bb81-9dfe45dd182e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.064454 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea21323d-7af3-478b-bb81-9dfe45dd182e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.066290 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea21323d-7af3-478b-bb81-9dfe45dd182e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.081574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea21323d-7af3-478b-bb81-9dfe45dd182e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc 
kubenswrapper[4897]: I0214 18:49:13.082427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea21323d-7af3-478b-bb81-9dfe45dd182e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.083730 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qg92\" (UniqueName: \"kubernetes.io/projected/ea21323d-7af3-478b-bb81-9dfe45dd182e-kube-api-access-6qg92\") pod \"prometheus-operator-db54df47d-nw4hd\" (UID: \"ea21323d-7af3-478b-bb81-9dfe45dd182e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.276754 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" Feb 14 18:49:13 crc kubenswrapper[4897]: I0214 18:49:13.513846 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nw4hd"] Feb 14 18:49:13 crc kubenswrapper[4897]: W0214 18:49:13.521461 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea21323d_7af3_478b_bb81_9dfe45dd182e.slice/crio-474c9657cf4632738c92b92d0bb08a50aff37c397d6eed798e9c5daed91e2abc WatchSource:0}: Error finding container 474c9657cf4632738c92b92d0bb08a50aff37c397d6eed798e9c5daed91e2abc: Status 404 returned error can't find the container with id 474c9657cf4632738c92b92d0bb08a50aff37c397d6eed798e9c5daed91e2abc Feb 14 18:49:14 crc kubenswrapper[4897]: I0214 18:49:14.172350 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" 
event={"ID":"ea21323d-7af3-478b-bb81-9dfe45dd182e","Type":"ContainerStarted","Data":"474c9657cf4632738c92b92d0bb08a50aff37c397d6eed798e9c5daed91e2abc"} Feb 14 18:49:16 crc kubenswrapper[4897]: I0214 18:49:16.186326 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" event={"ID":"ea21323d-7af3-478b-bb81-9dfe45dd182e","Type":"ContainerStarted","Data":"cd3d5ce89ad73a56a2033507a6dfe4e7f478129e792c422c40f688897c5e968f"} Feb 14 18:49:17 crc kubenswrapper[4897]: I0214 18:49:17.195055 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" event={"ID":"ea21323d-7af3-478b-bb81-9dfe45dd182e","Type":"ContainerStarted","Data":"565a7d8590eebedf9253cc3ab1712a0f44c6c006a111f39595563d74fd726de4"} Feb 14 18:49:17 crc kubenswrapper[4897]: I0214 18:49:17.687494 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" podUID="c8aadef2-477c-4699-9a1b-dd557ad9e273" containerName="registry" containerID="cri-o://232f3c737ff8b8ee99153a62d77f08996a90061918e84f647703f787e430ee25" gracePeriod=30 Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.203333 4897 generic.go:334] "Generic (PLEG): container finished" podID="c8aadef2-477c-4699-9a1b-dd557ad9e273" containerID="232f3c737ff8b8ee99153a62d77f08996a90061918e84f647703f787e430ee25" exitCode=0 Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.203532 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" event={"ID":"c8aadef2-477c-4699-9a1b-dd557ad9e273","Type":"ContainerDied","Data":"232f3c737ff8b8ee99153a62d77f08996a90061918e84f647703f787e430ee25"} Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.204927 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" 
event={"ID":"c8aadef2-477c-4699-9a1b-dd557ad9e273","Type":"ContainerDied","Data":"9129ed5e2f54103df5f6c7696ef97cafcda0704b98e910f51685a1d49f7aa462"} Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.205065 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9129ed5e2f54103df5f6c7696ef97cafcda0704b98e910f51685a1d49f7aa462" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.224723 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.256440 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-nw4hd" podStartSLOduration=3.994581717 podStartE2EDuration="6.256411228s" podCreationTimestamp="2026-02-14 18:49:12 +0000 UTC" firstStartedPulling="2026-02-14 18:49:13.523339871 +0000 UTC m=+406.499748354" lastFinishedPulling="2026-02-14 18:49:15.785169382 +0000 UTC m=+408.761577865" observedRunningTime="2026-02-14 18:49:17.218812083 +0000 UTC m=+410.195220646" watchObservedRunningTime="2026-02-14 18:49:18.256411228 +0000 UTC m=+411.232819711" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.362831 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-trusted-ca\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363166 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363213 
4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-certificates\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363268 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8aadef2-477c-4699-9a1b-dd557ad9e273-installation-pull-secrets\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363303 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-tls\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363371 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8aadef2-477c-4699-9a1b-dd557ad9e273-ca-trust-extracted\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363425 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-bound-sa-token\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363510 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58zds\" (UniqueName: 
\"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-kube-api-access-58zds\") pod \"c8aadef2-477c-4699-9a1b-dd557ad9e273\" (UID: \"c8aadef2-477c-4699-9a1b-dd557ad9e273\") " Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.363934 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.365392 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.374183 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8aadef2-477c-4699-9a1b-dd557ad9e273-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.374465 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-kube-api-access-58zds" (OuterVolumeSpecName: "kube-api-access-58zds") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "kube-api-access-58zds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.374559 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.376213 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.377590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.391412 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8aadef2-477c-4699-9a1b-dd557ad9e273-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c8aadef2-477c-4699-9a1b-dd557ad9e273" (UID: "c8aadef2-477c-4699-9a1b-dd557ad9e273"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465379 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58zds\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-kube-api-access-58zds\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465435 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465459 4897 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465479 4897 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8aadef2-477c-4699-9a1b-dd557ad9e273-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465497 4897 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465515 4897 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8aadef2-477c-4699-9a1b-dd557ad9e273-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:18 crc kubenswrapper[4897]: I0214 18:49:18.465534 4897 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8aadef2-477c-4699-9a1b-dd557ad9e273-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 18:49:19 crc 
kubenswrapper[4897]: I0214 18:49:19.208930 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fq4zf" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.238405 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fq4zf"] Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.241226 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fq4zf"] Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.298773 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg"] Feb 14 18:49:19 crc kubenswrapper[4897]: E0214 18:49:19.298978 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8aadef2-477c-4699-9a1b-dd557ad9e273" containerName="registry" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.298991 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8aadef2-477c-4699-9a1b-dd557ad9e273" containerName="registry" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.299113 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8aadef2-477c-4699-9a1b-dd557ad9e273" containerName="registry" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.300091 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.303650 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.303682 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-6wmg6" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.303877 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.327788 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-7g4lj"] Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.328814 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.332242 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-nzx8p" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.335014 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.343145 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.372660 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg"] Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.375634 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l"] Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 
18:49:19.376601 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.377696 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvfs\" (UniqueName: \"kubernetes.io/projected/c40b9022-9f67-409d-9537-f7c51cbde229-kube-api-access-psvfs\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.377742 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c40b9022-9f67-409d-9537-f7c51cbde229-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.377772 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.377907 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.380178 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.380222 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.380274 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-xlq5q" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.380535 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.471156 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l"] Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478589 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478651 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4t6s\" (UniqueName: \"kubernetes.io/projected/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-api-access-s4t6s\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478677 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-root\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478702 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-tls\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478726 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478753 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-textfile\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: E0214 18:49:19.478780 4897 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.478791 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: E0214 18:49:19.479101 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-tls podName:c40b9022-9f67-409d-9537-f7c51cbde229 nodeName:}" failed. No retries permitted until 2026-02-14 18:49:19.979083076 +0000 UTC m=+412.955491559 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-pzmbg" (UID: "c40b9022-9f67-409d-9537-f7c51cbde229") : secret "openshift-state-metrics-tls" not found Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479325 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479392 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03cd71ad-ad51-43fb-b82b-f7c366799f65-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479467 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479541 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-sys\") 
pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479584 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4n9n\" (UniqueName: \"kubernetes.io/projected/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-kube-api-access-k4n9n\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479622 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvfs\" (UniqueName: \"kubernetes.io/projected/c40b9022-9f67-409d-9537-f7c51cbde229-kube-api-access-psvfs\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479670 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/03cd71ad-ad51-43fb-b82b-f7c366799f65-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479836 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c40b9022-9f67-409d-9537-f7c51cbde229-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479877 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-wtmp\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.479916 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-metrics-client-ca\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.480907 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c40b9022-9f67-409d-9537-f7c51cbde229-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.484322 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.497407 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvfs\" (UniqueName: \"kubernetes.io/projected/c40b9022-9f67-409d-9537-f7c51cbde229-kube-api-access-psvfs\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-textfile\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580785 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580814 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580842 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03cd71ad-ad51-43fb-b82b-f7c366799f65-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580877 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-sys\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580903 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4n9n\" (UniqueName: \"kubernetes.io/projected/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-kube-api-access-k4n9n\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580938 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/03cd71ad-ad51-43fb-b82b-f7c366799f65-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.580973 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-wtmp\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581003 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-metrics-client-ca\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4t6s\" (UniqueName: \"kubernetes.io/projected/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-api-access-s4t6s\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581084 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-root\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581109 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-tls\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581117 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-textfile\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581425 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/03cd71ad-ad51-43fb-b82b-f7c366799f65-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.581962 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-root\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.582209 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-wtmp\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: E0214 18:49:19.582383 4897 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 14 18:49:19 crc kubenswrapper[4897]: E0214 18:49:19.582536 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-tls podName:9c5f4625-c2b6-4f58-9cc3-588405b0fbae nodeName:}" failed. No retries permitted until 2026-02-14 18:49:20.082509015 +0000 UTC m=+413.058917498 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-tls") pod "node-exporter-7g4lj" (UID: "9c5f4625-c2b6-4f58-9cc3-588405b0fbae") : secret "node-exporter-tls" not found Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.582773 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-metrics-client-ca\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.582828 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-sys\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.583550 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.585208 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03cd71ad-ad51-43fb-b82b-f7c366799f65-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.585446 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.604872 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.609256 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.609321 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4t6s\" (UniqueName: \"kubernetes.io/projected/03cd71ad-ad51-43fb-b82b-f7c366799f65-kube-api-access-s4t6s\") pod \"kube-state-metrics-777cb5bd5d-tkm4l\" (UID: \"03cd71ad-ad51-43fb-b82b-f7c366799f65\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.609438 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4n9n\" (UniqueName: \"kubernetes.io/projected/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-kube-api-access-k4n9n\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " 
pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.691771 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.814758 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8aadef2-477c-4699-9a1b-dd557ad9e273" path="/var/lib/kubelet/pods/c8aadef2-477c-4699-9a1b-dd557ad9e273/volumes" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.985760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:19 crc kubenswrapper[4897]: I0214 18:49:19.990584 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c40b9022-9f67-409d-9537-f7c51cbde229-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-pzmbg\" (UID: \"c40b9022-9f67-409d-9537-f7c51cbde229\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:20 crc kubenswrapper[4897]: I0214 18:49:20.087078 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-tls\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:20 crc kubenswrapper[4897]: I0214 18:49:20.090161 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/9c5f4625-c2b6-4f58-9cc3-588405b0fbae-node-exporter-tls\") pod \"node-exporter-7g4lj\" (UID: \"9c5f4625-c2b6-4f58-9cc3-588405b0fbae\") " pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:20 crc kubenswrapper[4897]: I0214 18:49:20.144153 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l"] Feb 14 18:49:20 crc kubenswrapper[4897]: W0214 18:49:20.146822 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cd71ad_ad51_43fb_b82b_f7c366799f65.slice/crio-319b33ddc642bfc3d4a602fec3499b503daf242ceb65f44a796deecb2daa0058 WatchSource:0}: Error finding container 319b33ddc642bfc3d4a602fec3499b503daf242ceb65f44a796deecb2daa0058: Status 404 returned error can't find the container with id 319b33ddc642bfc3d4a602fec3499b503daf242ceb65f44a796deecb2daa0058 Feb 14 18:49:20 crc kubenswrapper[4897]: I0214 18:49:20.215066 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" Feb 14 18:49:20 crc kubenswrapper[4897]: I0214 18:49:20.217413 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" event={"ID":"03cd71ad-ad51-43fb-b82b-f7c366799f65","Type":"ContainerStarted","Data":"319b33ddc642bfc3d4a602fec3499b503daf242ceb65f44a796deecb2daa0058"} Feb 14 18:49:20 crc kubenswrapper[4897]: I0214 18:49:20.243875 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-7g4lj" Feb 14 18:49:20 crc kubenswrapper[4897]: W0214 18:49:20.310783 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c5f4625_c2b6_4f58_9cc3_588405b0fbae.slice/crio-bb251de3f0c351b81a471825c3e28f792048d2f063833013d62406809ac49cf4 WatchSource:0}: Error finding container bb251de3f0c351b81a471825c3e28f792048d2f063833013d62406809ac49cf4: Status 404 returned error can't find the container with id bb251de3f0c351b81a471825c3e28f792048d2f063833013d62406809ac49cf4 Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.425005 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.427400 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.429391 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mvjdl" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.429619 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.429735 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.429861 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.430143 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.430629 4897 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.430853 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.431587 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.438361 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.440948 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 18:49:23 crc kubenswrapper[4897]: W0214 18:49:20.464040 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40b9022_9f67_409d_9537_f7c51cbde229.slice/crio-b5e3ce22773cd7a878d6a771a98c6de99d38c44efaf948232416847bc6b0cf56 WatchSource:0}: Error finding container b5e3ce22773cd7a878d6a771a98c6de99d38c44efaf948232416847bc6b0cf56: Status 404 returned error can't find the container with id b5e3ce22773cd7a878d6a771a98c6de99d38c44efaf948232416847bc6b0cf56 Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.464181 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg"] Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.496624 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc 
kubenswrapper[4897]: I0214 18:49:20.496693 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.496740 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7718217e-05bd-4eed-a87d-34ddc193453f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.496941 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vwq\" (UniqueName: \"kubernetes.io/projected/7718217e-05bd-4eed-a87d-34ddc193453f-kube-api-access-r2vwq\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.496999 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/7718217e-05bd-4eed-a87d-34ddc193453f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497118 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7718217e-05bd-4eed-a87d-34ddc193453f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: 
\"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497165 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-web-config\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497203 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497235 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7718217e-05bd-4eed-a87d-34ddc193453f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497269 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7718217e-05bd-4eed-a87d-34ddc193453f-config-out\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497294 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.497321 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-config-volume\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.597837 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598129 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7718217e-05bd-4eed-a87d-34ddc193453f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598156 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7718217e-05bd-4eed-a87d-34ddc193453f-config-out\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: E0214 18:49:20.598154 4897 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 14 18:49:23 crc 
kubenswrapper[4897]: I0214 18:49:20.598178 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-config-volume\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598219 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: E0214 18:49:20.598244 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-main-tls podName:7718217e-05bd-4eed-a87d-34ddc193453f nodeName:}" failed. No retries permitted until 2026-02-14 18:49:21.098217401 +0000 UTC m=+414.074625984 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "7718217e-05bd-4eed-a87d-34ddc193453f") : secret "alertmanager-main-tls" not found Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598282 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7718217e-05bd-4eed-a87d-34ddc193453f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598424 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vwq\" (UniqueName: \"kubernetes.io/projected/7718217e-05bd-4eed-a87d-34ddc193453f-kube-api-access-r2vwq\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598492 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/7718217e-05bd-4eed-a87d-34ddc193453f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598514 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7718217e-05bd-4eed-a87d-34ddc193453f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.598559 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-web-config\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.599416 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7718217e-05bd-4eed-a87d-34ddc193453f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.599543 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/7718217e-05bd-4eed-a87d-34ddc193453f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.600672 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7718217e-05bd-4eed-a87d-34ddc193453f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.605511 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7718217e-05bd-4eed-a87d-34ddc193453f-config-out\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.605945 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.605963 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7718217e-05bd-4eed-a87d-34ddc193453f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.606169 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-config-volume\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.606348 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.606722 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.608010 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-web-config\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:20.616900 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vwq\" (UniqueName: \"kubernetes.io/projected/7718217e-05bd-4eed-a87d-34ddc193453f-kube-api-access-r2vwq\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.109197 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.115069 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7718217e-05bd-4eed-a87d-34ddc193453f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7718217e-05bd-4eed-a87d-34ddc193453f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.224258 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" 
event={"ID":"c40b9022-9f67-409d-9537-f7c51cbde229","Type":"ContainerStarted","Data":"dc027a344e588e0287cf91922b737a0a23aa3d341dfcc4b14949a8967acde623"} Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.224317 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" event={"ID":"c40b9022-9f67-409d-9537-f7c51cbde229","Type":"ContainerStarted","Data":"b5e3ce22773cd7a878d6a771a98c6de99d38c44efaf948232416847bc6b0cf56"} Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.225370 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7g4lj" event={"ID":"9c5f4625-c2b6-4f58-9cc3-588405b0fbae","Type":"ContainerStarted","Data":"bb251de3f0c351b81a471825c3e28f792048d2f063833013d62406809ac49cf4"} Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.311225 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c"] Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.313476 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.317353 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.318362 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.318609 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-drl4shhbe089o" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.319601 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.319737 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.319943 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.321929 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-dz75t" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.327858 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c"] Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.346805 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.416734 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.416783 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfds8\" (UniqueName: \"kubernetes.io/projected/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-kube-api-access-lfds8\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.416808 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-metrics-client-ca\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.416840 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.417597 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.417654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-grpc-tls\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.417710 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.418123 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-tls\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520297 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520408 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfds8\" (UniqueName: \"kubernetes.io/projected/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-kube-api-access-lfds8\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520442 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-metrics-client-ca\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520476 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 
14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520536 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-grpc-tls\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520590 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.520618 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-tls\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.522163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-metrics-client-ca\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.524217 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-web\") pod 
\"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.527652 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.527821 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-grpc-tls\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.528353 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.528461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-tls\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.537926 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.550862 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfds8\" (UniqueName: \"kubernetes.io/projected/15de099a-88c7-4c7c-9b4e-8d10c1e392f3-kube-api-access-lfds8\") pod \"thanos-querier-86c7f7cb9c-fsl5c\" (UID: \"15de099a-88c7-4c7c-9b4e-8d10c1e392f3\") " pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:21.639644 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:23 crc kubenswrapper[4897]: I0214 18:49:22.236808 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" event={"ID":"c40b9022-9f67-409d-9537-f7c51cbde229","Type":"ContainerStarted","Data":"8361e5ee2cfd44ba2352217cbf9c0c26c79fcda9f9989c380e0259ee9998f015"} Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.162078 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66b5df45c6-rrlk8"] Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.162950 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.167267 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.174050 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c"] Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.212352 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66b5df45c6-rrlk8"] Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.361928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-console-config\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.362023 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-oauth-serving-cert\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.362100 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-trusted-ca-bundle\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.363049 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-oauth-config\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.363165 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-service-ca\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.363250 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfsq\" (UniqueName: \"kubernetes.io/projected/9490a7a2-1c74-4391-b113-5a37b912de71-kube-api-access-9wfsq\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.363318 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-serving-cert\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465285 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-trusted-ca-bundle\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465376 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-oauth-config\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465491 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-service-ca\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465542 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfsq\" (UniqueName: \"kubernetes.io/projected/9490a7a2-1c74-4391-b113-5a37b912de71-kube-api-access-9wfsq\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-serving-cert\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465662 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-console-config\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.465712 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-oauth-serving-cert\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.466588 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-trusted-ca-bundle\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.466889 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-console-config\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.467088 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-oauth-serving-cert\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.467336 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-service-ca\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.475571 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-serving-cert\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.476576 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-oauth-config\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.493278 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfsq\" (UniqueName: \"kubernetes.io/projected/9490a7a2-1c74-4391-b113-5a37b912de71-kube-api-access-9wfsq\") pod \"console-66b5df45c6-rrlk8\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.684503 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7cfcf6657f-wsnmf"] Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.686199 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.689115 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.690230 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c3g2gq75h56ak" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.690379 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-xmx5n" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.690510 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.690720 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.690797 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.694856 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7cfcf6657f-wsnmf"] Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.780825 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.871918 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-secret-metrics-client-certs\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.871973 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-client-ca-bundle\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.872390 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9748a754-75f5-4f7d-9e7b-a6135dd3778d-metrics-server-audit-profiles\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.872486 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9k9\" (UniqueName: \"kubernetes.io/projected/9748a754-75f5-4f7d-9e7b-a6135dd3778d-kube-api-access-hh9k9\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.872543 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9748a754-75f5-4f7d-9e7b-a6135dd3778d-audit-log\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.872566 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-secret-metrics-server-tls\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.872590 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9748a754-75f5-4f7d-9e7b-a6135dd3778d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.973813 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-secret-metrics-client-certs\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.973888 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-client-ca-bundle\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " 
pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.973946 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9748a754-75f5-4f7d-9e7b-a6135dd3778d-metrics-server-audit-profiles\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.973974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9k9\" (UniqueName: \"kubernetes.io/projected/9748a754-75f5-4f7d-9e7b-a6135dd3778d-kube-api-access-hh9k9\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.974000 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9748a754-75f5-4f7d-9e7b-a6135dd3778d-audit-log\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.974016 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-secret-metrics-server-tls\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.974044 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9748a754-75f5-4f7d-9e7b-a6135dd3778d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.975270 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9748a754-75f5-4f7d-9e7b-a6135dd3778d-audit-log\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.976627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9748a754-75f5-4f7d-9e7b-a6135dd3778d-metrics-server-audit-profiles\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.978557 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9748a754-75f5-4f7d-9e7b-a6135dd3778d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.979671 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-secret-metrics-server-tls\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.983936 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-secret-metrics-client-certs\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:24 crc kubenswrapper[4897]: I0214 18:49:24.984917 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9748a754-75f5-4f7d-9e7b-a6135dd3778d-client-ca-bundle\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.002548 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9k9\" (UniqueName: \"kubernetes.io/projected/9748a754-75f5-4f7d-9e7b-a6135dd3778d-kube-api-access-hh9k9\") pod \"metrics-server-7cfcf6657f-wsnmf\" (UID: \"9748a754-75f5-4f7d-9e7b-a6135dd3778d\") " pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.014417 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.100728 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g"] Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.101602 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.104569 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.105065 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.108615 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g"] Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.263615 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"437fedea71afd800ff896b68e99104d97be6ee452c435f678a280134cd713c29"} Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.276592 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"5bb5157865dbcdeaa2fc0e092085748414fe1d701384736124b30439749ca51c"} Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.278559 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/959d187e-bbbf-4e61-b0d7-67a6b30529a4-monitoring-plugin-cert\") pod \"monitoring-plugin-79d749bcb5-rfm5g\" (UID: \"959d187e-bbbf-4e61-b0d7-67a6b30529a4\") " pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.380311 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/959d187e-bbbf-4e61-b0d7-67a6b30529a4-monitoring-plugin-cert\") pod 
\"monitoring-plugin-79d749bcb5-rfm5g\" (UID: \"959d187e-bbbf-4e61-b0d7-67a6b30529a4\") " pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.384813 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/959d187e-bbbf-4e61-b0d7-67a6b30529a4-monitoring-plugin-cert\") pod \"monitoring-plugin-79d749bcb5-rfm5g\" (UID: \"959d187e-bbbf-4e61-b0d7-67a6b30529a4\") " pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.429773 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.669337 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66b5df45c6-rrlk8"] Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.715972 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7cfcf6657f-wsnmf"] Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.724757 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.726663 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.729527 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.730169 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.730318 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8r28sb1gk7pdu" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.730630 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.731521 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.733358 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.733472 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-wk4kb" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.733688 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.733769 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.734108 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.734645 4897 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.734963 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.739079 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.763533 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786395 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786472 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786494 
4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2bac22b-985e-423c-8765-df9df37cee02-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786535 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786558 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-config\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786576 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkg86\" (UniqueName: \"kubernetes.io/projected/d2bac22b-985e-423c-8765-df9df37cee02-kube-api-access-gkg86\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786602 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786624 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786649 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786673 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786708 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-web-config\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786733 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786791 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2bac22b-985e-423c-8765-df9df37cee02-config-out\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786813 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786845 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.786872 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887395 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-web-config\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887448 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887479 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887508 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2bac22b-985e-423c-8765-df9df37cee02-config-out\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887531 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: 
\"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887566 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887595 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.887625 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888552 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888591 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888614 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2bac22b-985e-423c-8765-df9df37cee02-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888657 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888682 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkg86\" (UniqueName: \"kubernetes.io/projected/d2bac22b-985e-423c-8765-df9df37cee02-kube-api-access-gkg86\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888707 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-config\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888744 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888766 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.888776 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.889398 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.889433 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.891223 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.891226 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.891586 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.893015 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.893719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/d2bac22b-985e-423c-8765-df9df37cee02-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.895632 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-config\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.895907 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.896045 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2bac22b-985e-423c-8765-df9df37cee02-config-out\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.896445 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.896472 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: 
\"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.896745 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-web-config\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.897748 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.898547 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.899155 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d2bac22b-985e-423c-8765-df9df37cee02-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.907330 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkg86\" (UniqueName: \"kubernetes.io/projected/d2bac22b-985e-423c-8765-df9df37cee02-kube-api-access-gkg86\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:25 crc kubenswrapper[4897]: I0214 18:49:25.982614 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g"] Feb 14 18:49:26 crc kubenswrapper[4897]: I0214 18:49:26.018658 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2bac22b-985e-423c-8765-df9df37cee02-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d2bac22b-985e-423c-8765-df9df37cee02\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:26 crc kubenswrapper[4897]: I0214 18:49:26.053626 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:26 crc kubenswrapper[4897]: I0214 18:49:26.284395 4897 generic.go:334] "Generic (PLEG): container finished" podID="9c5f4625-c2b6-4f58-9cc3-588405b0fbae" containerID="8e73d0a0f96890e54a51e02c16a83e3176a4bc1ddd055967e8d6c33d7c7f42b4" exitCode=0 Feb 14 18:49:26 crc kubenswrapper[4897]: I0214 18:49:26.284436 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7g4lj" event={"ID":"9c5f4625-c2b6-4f58-9cc3-588405b0fbae","Type":"ContainerDied","Data":"8e73d0a0f96890e54a51e02c16a83e3176a4bc1ddd055967e8d6c33d7c7f42b4"} Feb 14 18:49:26 crc kubenswrapper[4897]: W0214 18:49:26.687102 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9490a7a2_1c74_4391_b113_5a37b912de71.slice/crio-591df4d01dd838544463f24b831e1eae5831053ce0a1e65ef71722407b5f860b WatchSource:0}: Error finding container 591df4d01dd838544463f24b831e1eae5831053ce0a1e65ef71722407b5f860b: Status 404 returned error can't find the container with id 591df4d01dd838544463f24b831e1eae5831053ce0a1e65ef71722407b5f860b Feb 14 18:49:27 crc kubenswrapper[4897]: I0214 18:49:27.295060 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b5df45c6-rrlk8" event={"ID":"9490a7a2-1c74-4391-b113-5a37b912de71","Type":"ContainerStarted","Data":"591df4d01dd838544463f24b831e1eae5831053ce0a1e65ef71722407b5f860b"} Feb 14 18:49:27 crc kubenswrapper[4897]: I0214 18:49:27.299393 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" event={"ID":"959d187e-bbbf-4e61-b0d7-67a6b30529a4","Type":"ContainerStarted","Data":"b7a01fc6e30a97cbe5af6190938f1dc8fa146133128b2c40f07697f0372b80e4"} Feb 14 18:49:27 crc kubenswrapper[4897]: I0214 18:49:27.307818 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" event={"ID":"9748a754-75f5-4f7d-9e7b-a6135dd3778d","Type":"ContainerStarted","Data":"c10a08c792d509d323ba27a1966a9c1f25078ef4baa2bd9f3624f0d7dd187596"} Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.140806 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.315748 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"a778a762263e7b167d34c764ad5d2980e755a39a567951296bd035f9703b9454"} Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.317673 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" event={"ID":"c40b9022-9f67-409d-9537-f7c51cbde229","Type":"ContainerStarted","Data":"3fd1e76149052b39235c5f5bc2c24141778be19ae24da66b59d0bc26b6818a5e"} Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.320536 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7g4lj" 
event={"ID":"9c5f4625-c2b6-4f58-9cc3-588405b0fbae","Type":"ContainerStarted","Data":"08c3e777cba3b5d95ce4c06b1a9694bc7043b25f7ede118b706a8e59b71d35b2"} Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.325982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b5df45c6-rrlk8" event={"ID":"9490a7a2-1c74-4391-b113-5a37b912de71","Type":"ContainerStarted","Data":"4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2"} Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.329805 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" event={"ID":"03cd71ad-ad51-43fb-b82b-f7c366799f65","Type":"ContainerStarted","Data":"f179d3f8ae9e6d654cc7de228a1d2c54d841157b7f57f663cb299fb7690616b4"} Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.371895 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-pzmbg" podStartSLOduration=3.515134996 podStartE2EDuration="9.371879628s" podCreationTimestamp="2026-02-14 18:49:19 +0000 UTC" firstStartedPulling="2026-02-14 18:49:22.151708704 +0000 UTC m=+415.128117197" lastFinishedPulling="2026-02-14 18:49:28.008453336 +0000 UTC m=+420.984861829" observedRunningTime="2026-02-14 18:49:28.341953773 +0000 UTC m=+421.318362256" watchObservedRunningTime="2026-02-14 18:49:28.371879628 +0000 UTC m=+421.348288111" Feb 14 18:49:28 crc kubenswrapper[4897]: I0214 18:49:28.373296 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66b5df45c6-rrlk8" podStartSLOduration=4.373289171 podStartE2EDuration="4.373289171s" podCreationTimestamp="2026-02-14 18:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:49:28.37031879 +0000 UTC m=+421.346727283" watchObservedRunningTime="2026-02-14 18:49:28.373289171 +0000 
UTC m=+421.349697654" Feb 14 18:49:29 crc kubenswrapper[4897]: I0214 18:49:29.344080 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7g4lj" event={"ID":"9c5f4625-c2b6-4f58-9cc3-588405b0fbae","Type":"ContainerStarted","Data":"0221cca23f2d6a2ecd8cda8147d2da1d84d3591c2150aa28bcdbdfaa3612b58b"} Feb 14 18:49:29 crc kubenswrapper[4897]: I0214 18:49:29.347554 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" event={"ID":"03cd71ad-ad51-43fb-b82b-f7c366799f65","Type":"ContainerStarted","Data":"dcbc48d5c4e1f83d5df45bb48a2698dc1c908683bc87ddba25e7043d42769f21"} Feb 14 18:49:29 crc kubenswrapper[4897]: I0214 18:49:29.347594 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" event={"ID":"03cd71ad-ad51-43fb-b82b-f7c366799f65","Type":"ContainerStarted","Data":"2f0af7e425b41883fb7ef1cc2bde4bd145d779b63cc6540dfcec84e91f775bd8"} Feb 14 18:49:29 crc kubenswrapper[4897]: I0214 18:49:29.376701 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-7g4lj" podStartSLOduration=5.26142829 podStartE2EDuration="10.37599887s" podCreationTimestamp="2026-02-14 18:49:19 +0000 UTC" firstStartedPulling="2026-02-14 18:49:20.318302071 +0000 UTC m=+413.294710554" lastFinishedPulling="2026-02-14 18:49:25.432872611 +0000 UTC m=+418.409281134" observedRunningTime="2026-02-14 18:49:29.36912405 +0000 UTC m=+422.345532553" watchObservedRunningTime="2026-02-14 18:49:29.37599887 +0000 UTC m=+422.352407353" Feb 14 18:49:29 crc kubenswrapper[4897]: I0214 18:49:29.394454 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-tkm4l" podStartSLOduration=2.65130766 podStartE2EDuration="10.394434243s" podCreationTimestamp="2026-02-14 18:49:19 +0000 UTC" firstStartedPulling="2026-02-14 18:49:20.149393632 
+0000 UTC m=+413.125802115" lastFinishedPulling="2026-02-14 18:49:27.892520175 +0000 UTC m=+420.868928698" observedRunningTime="2026-02-14 18:49:29.385234562 +0000 UTC m=+422.361643045" watchObservedRunningTime="2026-02-14 18:49:29.394434243 +0000 UTC m=+422.370842726" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.361929 4897 generic.go:334] "Generic (PLEG): container finished" podID="d2bac22b-985e-423c-8765-df9df37cee02" containerID="acaa4030b6c3474d339ec6f6d39c269188ddb2c9bcf9654b00d509b8241e492b" exitCode=0 Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.361983 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerDied","Data":"acaa4030b6c3474d339ec6f6d39c269188ddb2c9bcf9654b00d509b8241e492b"} Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.366628 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"8e6224d73808f69ad8fe5ec2ba1c05820b07e03b45418bfb34df40f3e4e31b43"} Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.366692 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"e0f12d0cf67d60335759c4166dd6151f83e4a40697b2860af4abe0420888ee3c"} Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.370791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" event={"ID":"959d187e-bbbf-4e61-b0d7-67a6b30529a4","Type":"ContainerStarted","Data":"80e29b1a0654efcfc598710f56bc22166213b93d93abe086ea9fd3cd7e80ee4d"} Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.371020 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.373368 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" event={"ID":"9748a754-75f5-4f7d-9e7b-a6135dd3778d","Type":"ContainerStarted","Data":"4a2ab4d17858d582748edaafa439d45f133d6351b1fe7558ddad33188c7b1b13"} Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.377519 4897 generic.go:334] "Generic (PLEG): container finished" podID="7718217e-05bd-4eed-a87d-34ddc193453f" containerID="9523b3fc5ddf3bf1825563ee9658d22593f4e115f64c5521aebe652327b31bd8" exitCode=0 Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.377573 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerDied","Data":"9523b3fc5ddf3bf1825563ee9658d22593f4e115f64c5521aebe652327b31bd8"} Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.402406 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.436103 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podStartSLOduration=3.323312148 podStartE2EDuration="7.436077027s" podCreationTimestamp="2026-02-14 18:49:24 +0000 UTC" firstStartedPulling="2026-02-14 18:49:26.694454969 +0000 UTC m=+419.670863492" lastFinishedPulling="2026-02-14 18:49:30.807219888 +0000 UTC m=+423.783628371" observedRunningTime="2026-02-14 18:49:31.434961053 +0000 UTC m=+424.411369566" watchObservedRunningTime="2026-02-14 18:49:31.436077027 +0000 UTC m=+424.412485550" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.454177 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" 
podStartSLOduration=2.341962759 podStartE2EDuration="6.45414856s" podCreationTimestamp="2026-02-14 18:49:25 +0000 UTC" firstStartedPulling="2026-02-14 18:49:26.69448139 +0000 UTC m=+419.670889913" lastFinishedPulling="2026-02-14 18:49:30.806667201 +0000 UTC m=+423.783075714" observedRunningTime="2026-02-14 18:49:31.451839939 +0000 UTC m=+424.428248452" watchObservedRunningTime="2026-02-14 18:49:31.45414856 +0000 UTC m=+424.430557073" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.726283 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.726513 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.726590 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.727533 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 18:49:31 crc kubenswrapper[4897]: I0214 18:49:31.727648 4897 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de" gracePeriod=600 Feb 14 18:49:31 crc kubenswrapper[4897]: E0214 18:49:31.790133 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-conmon-ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de.scope\": RecentStats: unable to find data in memory cache]" Feb 14 18:49:32 crc kubenswrapper[4897]: I0214 18:49:32.386803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"00b0d2ad316c38254d68b776a930c8f032e050bf1ee71cf98196206a57b9e927"} Feb 14 18:49:32 crc kubenswrapper[4897]: I0214 18:49:32.389336 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de" exitCode=0 Feb 14 18:49:32 crc kubenswrapper[4897]: I0214 18:49:32.389420 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de"} Feb 14 18:49:32 crc kubenswrapper[4897]: I0214 18:49:32.389458 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8"} Feb 14 18:49:32 crc kubenswrapper[4897]: I0214 18:49:32.389479 
4897 scope.go:117] "RemoveContainer" containerID="b02396cbdc2046f7a26832ce302e1f74f885d649c68e676d589f781bf1db97af" Feb 14 18:49:33 crc kubenswrapper[4897]: I0214 18:49:33.400006 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"cd993807227bca53a190b1f3d674d1ec8340ec3cfa1c81cb95d0689c82e19803"} Feb 14 18:49:33 crc kubenswrapper[4897]: I0214 18:49:33.400322 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"55c58d48e0f9d9520d77748864b3ba7e9f23b4146ffa8d8012cca1154216e2c2"} Feb 14 18:49:33 crc kubenswrapper[4897]: I0214 18:49:33.403611 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"bf06080629f8a32f360f4838cdc2135f883941b63c3c952449ff0c02f04b21b6"} Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.414860 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" event={"ID":"15de099a-88c7-4c7c-9b4e-8d10c1e392f3","Type":"ContainerStarted","Data":"4c2f9cc155ab53f17604fd4352d4416ea212fc375cfec8521a67df889f25840c"} Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.415664 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.433375 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"c881018ea8524f9379931c9d9f5483590ff6fd3a0ad02ad7c3d5541b6d62aa75"} Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.435398 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"7adb570885b726dbdc2bacbbdfa2bdbddce5aa32a1fc5dc641b24c29aca6838a"} Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.450998 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" podStartSLOduration=6.212571194 podStartE2EDuration="13.450972331s" podCreationTimestamp="2026-02-14 18:49:21 +0000 UTC" firstStartedPulling="2026-02-14 18:49:25.245763146 +0000 UTC m=+418.222171619" lastFinishedPulling="2026-02-14 18:49:32.484164263 +0000 UTC m=+425.460572756" observedRunningTime="2026-02-14 18:49:34.444793332 +0000 UTC m=+427.421201865" watchObservedRunningTime="2026-02-14 18:49:34.450972331 +0000 UTC m=+427.427380834" Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.781521 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.781564 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:34 crc kubenswrapper[4897]: I0214 18:49:34.786467 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.457233 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"ae0233c873b7895ab5ec8466a5d8d92d9e6802b55ddbaffbe0a5b0a74c5e3463"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.457310 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"f6f4375f0f8c3a97daac372a2e7023271fd5e48aec611ac5696a5e9ca486925c"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.457356 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"86704fdfe86120f3551aec737225aef3ec60f51202810573671ef11530465da1"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.457385 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7718217e-05bd-4eed-a87d-34ddc193453f","Type":"ContainerStarted","Data":"6aa57af991d495e554a98e5dbc2c3222d5c4790e0c8153cba637ddb2efe2c45c"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.463403 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"ac662934d1b1d91a5930d8a7abe3e7ab6e662461facb7d04193d5807fd532e37"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.463475 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"461eaba6896d011b85af1c256d1c9e5c61b1cb9aa36155bb9ee5e0f4ebc74815"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.463486 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"cad1935f77d14bbc3c3f25a044cad9a5cb111c66ce6db101455fcfa9351ef86c"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.463496 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"68da2158dc98a3ef1c7aa74ce117c216156a7a210459a9b407fa730e0fe1ca88"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.463506 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d2bac22b-985e-423c-8765-df9df37cee02","Type":"ContainerStarted","Data":"79fd78b3d6e96c6fab96c4c58b8d749958b8e5cb0b4a1ee54fc350bdd521eda4"} Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.469709 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.517128 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=7.901013632 podStartE2EDuration="15.517084696s" podCreationTimestamp="2026-02-14 18:49:20 +0000 UTC" firstStartedPulling="2026-02-14 18:49:25.416697237 +0000 UTC m=+418.393105770" lastFinishedPulling="2026-02-14 18:49:33.032768311 +0000 UTC m=+426.009176834" observedRunningTime="2026-02-14 18:49:35.499723807 +0000 UTC m=+428.476132340" watchObservedRunningTime="2026-02-14 18:49:35.517084696 +0000 UTC m=+428.493493279" Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.561135 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.460965786 podStartE2EDuration="10.561105102s" podCreationTimestamp="2026-02-14 18:49:25 +0000 UTC" firstStartedPulling="2026-02-14 18:49:28.149077932 +0000 UTC m=+421.125486415" lastFinishedPulling="2026-02-14 18:49:34.249217248 +0000 UTC m=+427.225625731" observedRunningTime="2026-02-14 18:49:35.551614891 +0000 UTC m=+428.528023464" watchObservedRunningTime="2026-02-14 18:49:35.561105102 +0000 UTC m=+428.537513625" Feb 14 18:49:35 crc kubenswrapper[4897]: I0214 18:49:35.622113 4897 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-f9d7485db-6jjtk"] Feb 14 18:49:36 crc kubenswrapper[4897]: I0214 18:49:36.054512 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:49:36 crc kubenswrapper[4897]: I0214 18:49:36.483872 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" Feb 14 18:49:45 crc kubenswrapper[4897]: I0214 18:49:45.015202 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:49:45 crc kubenswrapper[4897]: I0214 18:49:45.015712 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:50:00 crc kubenswrapper[4897]: I0214 18:50:00.682733 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6jjtk" podUID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" containerName="console" containerID="cri-o://8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4" gracePeriod=15 Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.166371 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6jjtk_044e39f7-5f0c-4bd9-ad2b-6bab235abf9a/console/0.log" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.166795 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210253 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-serving-cert\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210404 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjgf9\" (UniqueName: \"kubernetes.io/projected/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-kube-api-access-cjgf9\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210498 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-oauth-config\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210551 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-oauth-serving-cert\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210599 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-config\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210654 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-trusted-ca-bundle\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.210705 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-service-ca\") pod \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\" (UID: \"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a\") " Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.211105 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-config" (OuterVolumeSpecName: "console-config") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.211552 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.211583 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-service-ca" (OuterVolumeSpecName: "service-ca") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.212577 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.212609 4897 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.212623 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.212840 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.221573 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.222150 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.222268 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-kube-api-access-cjgf9" (OuterVolumeSpecName: "kube-api-access-cjgf9") pod "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" (UID: "044e39f7-5f0c-4bd9-ad2b-6bab235abf9a"). InnerVolumeSpecName "kube-api-access-cjgf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.314152 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjgf9\" (UniqueName: \"kubernetes.io/projected/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-kube-api-access-cjgf9\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.314187 4897 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.314196 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.314209 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.733631 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6jjtk_044e39f7-5f0c-4bd9-ad2b-6bab235abf9a/console/0.log" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.734485 4897 generic.go:334] "Generic (PLEG): container finished" podID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" containerID="8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4" exitCode=2 Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.734591 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6jjtk" event={"ID":"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a","Type":"ContainerDied","Data":"8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4"} Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.734645 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6jjtk" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.734730 4897 scope.go:117] "RemoveContainer" containerID="8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.734669 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6jjtk" event={"ID":"044e39f7-5f0c-4bd9-ad2b-6bab235abf9a","Type":"ContainerDied","Data":"58c9ea43b70550e808154cbbe88bce0cb96d7581c712e635752e72c5f313ec06"} Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.772415 4897 scope.go:117] "RemoveContainer" containerID="8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4" Feb 14 18:50:01 crc kubenswrapper[4897]: E0214 18:50:01.772920 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4\": container with ID starting with 8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4 not found: ID does not exist" containerID="8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.773001 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4"} err="failed to get container status \"8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4\": rpc error: code = NotFound desc = could not find container \"8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4\": container with ID starting with 8d9d60ff40b373ccdbe9fec097e8d7fda7b22ec2e68a3639fcbdb9747b7058f4 not found: ID does not exist" Feb 14 18:50:01 crc kubenswrapper[4897]: I0214 18:50:01.792950 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6jjtk"] Feb 14 18:50:01 crc 
kubenswrapper[4897]: I0214 18:50:01.803857 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6jjtk"] Feb 14 18:50:01 crc kubenswrapper[4897]: E0214 18:50:01.815354 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod044e39f7_5f0c_4bd9_ad2b_6bab235abf9a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod044e39f7_5f0c_4bd9_ad2b_6bab235abf9a.slice/crio-58c9ea43b70550e808154cbbe88bce0cb96d7581c712e635752e72c5f313ec06\": RecentStats: unable to find data in memory cache]" Feb 14 18:50:03 crc kubenswrapper[4897]: I0214 18:50:03.806161 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" path="/var/lib/kubelet/pods/044e39f7-5f0c-4bd9-ad2b-6bab235abf9a/volumes" Feb 14 18:50:05 crc kubenswrapper[4897]: I0214 18:50:05.022012 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:50:05 crc kubenswrapper[4897]: I0214 18:50:05.027646 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 18:50:26 crc kubenswrapper[4897]: I0214 18:50:26.054555 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:50:26 crc kubenswrapper[4897]: I0214 18:50:26.104780 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:50:26 crc kubenswrapper[4897]: I0214 18:50:26.978264 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.758384 4897 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-console/console-74f65588b4-xzwdj"] Feb 14 18:50:53 crc kubenswrapper[4897]: E0214 18:50:53.759455 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" containerName="console" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.759471 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" containerName="console" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.759727 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="044e39f7-5f0c-4bd9-ad2b-6bab235abf9a" containerName="console" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.760486 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.769214 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74f65588b4-xzwdj"] Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798144 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-oauth-config\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798196 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7868\" (UniqueName: \"kubernetes.io/projected/1c9e604b-a644-4bd8-a149-c91719694ea8-kube-api-access-w7868\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798224 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-trusted-ca-bundle\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798267 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-serving-cert\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798284 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-service-ca\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798319 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-console-config\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.798345 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-oauth-serving-cert\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900135 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-serving-cert\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900188 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-service-ca\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900247 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-console-config\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900283 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-oauth-serving-cert\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900321 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-oauth-config\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900341 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w7868\" (UniqueName: \"kubernetes.io/projected/1c9e604b-a644-4bd8-a149-c91719694ea8-kube-api-access-w7868\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.900379 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-trusted-ca-bundle\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.901788 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-service-ca\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.901891 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-oauth-serving-cert\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.902570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-console-config\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.903961 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-trusted-ca-bundle\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.916416 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-oauth-config\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.916435 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-serving-cert\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:53 crc kubenswrapper[4897]: I0214 18:50:53.922399 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7868\" (UniqueName: \"kubernetes.io/projected/1c9e604b-a644-4bd8-a149-c91719694ea8-kube-api-access-w7868\") pod \"console-74f65588b4-xzwdj\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:54 crc kubenswrapper[4897]: I0214 18:50:54.097608 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:50:54 crc kubenswrapper[4897]: I0214 18:50:54.445268 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74f65588b4-xzwdj"] Feb 14 18:50:55 crc kubenswrapper[4897]: I0214 18:50:55.163872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74f65588b4-xzwdj" event={"ID":"1c9e604b-a644-4bd8-a149-c91719694ea8","Type":"ContainerStarted","Data":"6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6"} Feb 14 18:50:55 crc kubenswrapper[4897]: I0214 18:50:55.164292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74f65588b4-xzwdj" event={"ID":"1c9e604b-a644-4bd8-a149-c91719694ea8","Type":"ContainerStarted","Data":"e7b4123865730ba7d0cd78ab4175cecbb731f7ea248ddabfea8f52364481096a"} Feb 14 18:50:55 crc kubenswrapper[4897]: I0214 18:50:55.200183 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74f65588b4-xzwdj" podStartSLOduration=2.200157615 podStartE2EDuration="2.200157615s" podCreationTimestamp="2026-02-14 18:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:50:55.190961086 +0000 UTC m=+508.167369639" watchObservedRunningTime="2026-02-14 18:50:55.200157615 +0000 UTC m=+508.176566128" Feb 14 18:51:04 crc kubenswrapper[4897]: I0214 18:51:04.098355 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:51:04 crc kubenswrapper[4897]: I0214 18:51:04.099119 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:51:04 crc kubenswrapper[4897]: I0214 18:51:04.107969 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:51:04 crc kubenswrapper[4897]: I0214 18:51:04.244725 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:51:04 crc kubenswrapper[4897]: I0214 18:51:04.328135 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66b5df45c6-rrlk8"] Feb 14 18:51:28 crc kubenswrapper[4897]: I0214 18:51:28.177880 4897 scope.go:117] "RemoveContainer" containerID="232f3c737ff8b8ee99153a62d77f08996a90061918e84f647703f787e430ee25" Feb 14 18:51:29 crc kubenswrapper[4897]: I0214 18:51:29.393388 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-66b5df45c6-rrlk8" podUID="9490a7a2-1c74-4391-b113-5a37b912de71" containerName="console" containerID="cri-o://4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2" gracePeriod=15 Feb 14 18:51:29 crc kubenswrapper[4897]: I0214 18:51:29.859709 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66b5df45c6-rrlk8_9490a7a2-1c74-4391-b113-5a37b912de71/console/0.log" Feb 14 18:51:29 crc kubenswrapper[4897]: I0214 18:51:29.860137 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66b5df45c6-rrlk8" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050239 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-oauth-config\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050303 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-console-config\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050401 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wfsq\" (UniqueName: \"kubernetes.io/projected/9490a7a2-1c74-4391-b113-5a37b912de71-kube-api-access-9wfsq\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050462 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-serving-cert\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050495 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-oauth-serving-cert\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050598 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-service-ca\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.050651 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-trusted-ca-bundle\") pod \"9490a7a2-1c74-4391-b113-5a37b912de71\" (UID: \"9490a7a2-1c74-4391-b113-5a37b912de71\") " Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.052202 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-service-ca" (OuterVolumeSpecName: "service-ca") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.052254 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.052291 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-console-config" (OuterVolumeSpecName: "console-config") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.052422 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.059131 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.059286 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.060138 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9490a7a2-1c74-4391-b113-5a37b912de71-kube-api-access-9wfsq" (OuterVolumeSpecName: "kube-api-access-9wfsq") pod "9490a7a2-1c74-4391-b113-5a37b912de71" (UID: "9490a7a2-1c74-4391-b113-5a37b912de71"). InnerVolumeSpecName "kube-api-access-9wfsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153507 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153555 4897 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153569 4897 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153583 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wfsq\" (UniqueName: \"kubernetes.io/projected/9490a7a2-1c74-4391-b113-5a37b912de71-kube-api-access-9wfsq\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153599 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9490a7a2-1c74-4391-b113-5a37b912de71-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153612 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.153623 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9490a7a2-1c74-4391-b113-5a37b912de71-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:51:30 crc 
kubenswrapper[4897]: I0214 18:51:30.475601 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66b5df45c6-rrlk8_9490a7a2-1c74-4391-b113-5a37b912de71/console/0.log"
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.475685 4897 generic.go:334] "Generic (PLEG): container finished" podID="9490a7a2-1c74-4391-b113-5a37b912de71" containerID="4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2" exitCode=2
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.475733 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b5df45c6-rrlk8" event={"ID":"9490a7a2-1c74-4391-b113-5a37b912de71","Type":"ContainerDied","Data":"4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2"}
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.475775 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b5df45c6-rrlk8" event={"ID":"9490a7a2-1c74-4391-b113-5a37b912de71","Type":"ContainerDied","Data":"591df4d01dd838544463f24b831e1eae5831053ce0a1e65ef71722407b5f860b"}
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.475806 4897 scope.go:117] "RemoveContainer" containerID="4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2"
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.476003 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66b5df45c6-rrlk8"
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.503948 4897 scope.go:117] "RemoveContainer" containerID="4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2"
Feb 14 18:51:30 crc kubenswrapper[4897]: E0214 18:51:30.504685 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2\": container with ID starting with 4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2 not found: ID does not exist" containerID="4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2"
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.504738 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2"} err="failed to get container status \"4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2\": rpc error: code = NotFound desc = could not find container \"4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2\": container with ID starting with 4cff2c12e4c96835b98d2fce1605aca8429eb13667cfba1d91e28faf9dc320b2 not found: ID does not exist"
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.531279 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66b5df45c6-rrlk8"]
Feb 14 18:51:30 crc kubenswrapper[4897]: I0214 18:51:30.537177 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66b5df45c6-rrlk8"]
Feb 14 18:51:31 crc kubenswrapper[4897]: I0214 18:51:31.726714 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 18:51:31 crc kubenswrapper[4897]: I0214 18:51:31.726791 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 18:51:31 crc kubenswrapper[4897]: I0214 18:51:31.800601 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9490a7a2-1c74-4391-b113-5a37b912de71" path="/var/lib/kubelet/pods/9490a7a2-1c74-4391-b113-5a37b912de71/volumes"
Feb 14 18:52:01 crc kubenswrapper[4897]: I0214 18:52:01.726782 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 18:52:01 crc kubenswrapper[4897]: I0214 18:52:01.727441 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 18:52:31 crc kubenswrapper[4897]: I0214 18:52:31.726241 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 18:52:31 crc kubenswrapper[4897]: I0214 18:52:31.727026 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 18:52:31 crc kubenswrapper[4897]: I0214 18:52:31.727125 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 18:52:31 crc kubenswrapper[4897]: I0214 18:52:31.728324 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 18:52:31 crc kubenswrapper[4897]: I0214 18:52:31.728447 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8" gracePeriod=600
Feb 14 18:52:31 crc kubenswrapper[4897]: E0214 18:52:31.827748 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 18:52:32 crc kubenswrapper[4897]: I0214 18:52:32.110563 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8" exitCode=0
Feb 14 18:52:32 crc kubenswrapper[4897]: I0214 18:52:32.110603 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8"}
Feb 14 18:52:32 crc kubenswrapper[4897]: I0214 18:52:32.110630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"446e5cdc189ae2c51f665c763c60fe16201efbf3c0c2e1e9f8fe851134e12224"}
Feb 14 18:52:32 crc kubenswrapper[4897]: I0214 18:52:32.110646 4897 scope.go:117] "RemoveContainer" containerID="ac685437ddee138a3eaa2a50823011ad70b1b32e6d58f93b6f0439596a8822de"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.206140 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"]
Feb 14 18:53:26 crc kubenswrapper[4897]: E0214 18:53:26.207064 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9490a7a2-1c74-4391-b113-5a37b912de71" containerName="console"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.207082 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9490a7a2-1c74-4391-b113-5a37b912de71" containerName="console"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.207243 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9490a7a2-1c74-4391-b113-5a37b912de71" containerName="console"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.208378 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.211139 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.214265 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"]
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.305333 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.305406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.305614 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvm7r\" (UniqueName: \"kubernetes.io/projected/ba87404e-9bf2-4003-a612-0461c1af3db2-kube-api-access-kvm7r\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.407720 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvm7r\" (UniqueName: \"kubernetes.io/projected/ba87404e-9bf2-4003-a612-0461c1af3db2-kube-api-access-kvm7r\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.407866 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.407918 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.408643 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.409201 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.431528 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvm7r\" (UniqueName: \"kubernetes.io/projected/ba87404e-9bf2-4003-a612-0461c1af3db2-kube-api-access-kvm7r\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.534870 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.802194 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"]
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.988626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj" event={"ID":"ba87404e-9bf2-4003-a612-0461c1af3db2","Type":"ContainerStarted","Data":"dd25a334648d78c2ff9490da84a59fac5b4a58a9953957afcbe7f28fd9ad6fc2"}
Feb 14 18:53:26 crc kubenswrapper[4897]: I0214 18:53:26.988688 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj" event={"ID":"ba87404e-9bf2-4003-a612-0461c1af3db2","Type":"ContainerStarted","Data":"47fd2aef556218a506a1b77f288160283d79a16774aa550f430ec30a942399a0"}
Feb 14 18:53:28 crc kubenswrapper[4897]: I0214 18:53:28.000666 4897 generic.go:334] "Generic (PLEG): container finished" podID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerID="dd25a334648d78c2ff9490da84a59fac5b4a58a9953957afcbe7f28fd9ad6fc2" exitCode=0
Feb 14 18:53:28 crc kubenswrapper[4897]: I0214 18:53:28.000740 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj" event={"ID":"ba87404e-9bf2-4003-a612-0461c1af3db2","Type":"ContainerDied","Data":"dd25a334648d78c2ff9490da84a59fac5b4a58a9953957afcbe7f28fd9ad6fc2"}
Feb 14 18:53:28 crc kubenswrapper[4897]: I0214 18:53:28.004994 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 18:53:30 crc kubenswrapper[4897]: I0214 18:53:30.017335 4897 generic.go:334] "Generic (PLEG): container finished" podID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerID="284c529b53916c191dfd862f7d0483ebbfbabc5901dfab6513569bd683400d21" exitCode=0
Feb 14 18:53:30 crc kubenswrapper[4897]: I0214 18:53:30.017454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj" event={"ID":"ba87404e-9bf2-4003-a612-0461c1af3db2","Type":"ContainerDied","Data":"284c529b53916c191dfd862f7d0483ebbfbabc5901dfab6513569bd683400d21"}
Feb 14 18:53:31 crc kubenswrapper[4897]: I0214 18:53:31.030145 4897 generic.go:334] "Generic (PLEG): container finished" podID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerID="05dca552278969c504f69e3964ea730c56aa7a6e61d523df5fba61b1c09d5463" exitCode=0
Feb 14 18:53:31 crc kubenswrapper[4897]: I0214 18:53:31.030232 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj" event={"ID":"ba87404e-9bf2-4003-a612-0461c1af3db2","Type":"ContainerDied","Data":"05dca552278969c504f69e3964ea730c56aa7a6e61d523df5fba61b1c09d5463"}
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.355252 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.498959 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvm7r\" (UniqueName: \"kubernetes.io/projected/ba87404e-9bf2-4003-a612-0461c1af3db2-kube-api-access-kvm7r\") pod \"ba87404e-9bf2-4003-a612-0461c1af3db2\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") "
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.499121 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-util\") pod \"ba87404e-9bf2-4003-a612-0461c1af3db2\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") "
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.499183 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-bundle\") pod \"ba87404e-9bf2-4003-a612-0461c1af3db2\" (UID: \"ba87404e-9bf2-4003-a612-0461c1af3db2\") "
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.503411 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-bundle" (OuterVolumeSpecName: "bundle") pod "ba87404e-9bf2-4003-a612-0461c1af3db2" (UID: "ba87404e-9bf2-4003-a612-0461c1af3db2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.507757 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba87404e-9bf2-4003-a612-0461c1af3db2-kube-api-access-kvm7r" (OuterVolumeSpecName: "kube-api-access-kvm7r") pod "ba87404e-9bf2-4003-a612-0461c1af3db2" (UID: "ba87404e-9bf2-4003-a612-0461c1af3db2"). InnerVolumeSpecName "kube-api-access-kvm7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.600609 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.600664 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvm7r\" (UniqueName: \"kubernetes.io/projected/ba87404e-9bf2-4003-a612-0461c1af3db2-kube-api-access-kvm7r\") on node \"crc\" DevicePath \"\""
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.869857 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-util" (OuterVolumeSpecName: "util") pod "ba87404e-9bf2-4003-a612-0461c1af3db2" (UID: "ba87404e-9bf2-4003-a612-0461c1af3db2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:53:32 crc kubenswrapper[4897]: I0214 18:53:32.905786 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba87404e-9bf2-4003-a612-0461c1af3db2-util\") on node \"crc\" DevicePath \"\""
Feb 14 18:53:33 crc kubenswrapper[4897]: I0214 18:53:33.045710 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj" event={"ID":"ba87404e-9bf2-4003-a612-0461c1af3db2","Type":"ContainerDied","Data":"47fd2aef556218a506a1b77f288160283d79a16774aa550f430ec30a942399a0"}
Feb 14 18:53:33 crc kubenswrapper[4897]: I0214 18:53:33.045770 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fd2aef556218a506a1b77f288160283d79a16774aa550f430ec30a942399a0"
Feb 14 18:53:33 crc kubenswrapper[4897]: I0214 18:53:33.045806 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj"
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.385150 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fz879"]
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386274 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-controller" containerID="cri-o://b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386705 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="sbdb" containerID="cri-o://19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386764 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="nbdb" containerID="cri-o://962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386807 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="northd" containerID="cri-o://79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386845 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386882 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-node" containerID="cri-o://1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.386919 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-acl-logging" containerID="cri-o://837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85" gracePeriod=30
Feb 14 18:53:37 crc kubenswrapper[4897]: I0214 18:53:37.452808 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" containerID="cri-o://c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16" gracePeriod=30
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.089470 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovnkube-controller/3.log"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.092889 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovn-acl-logging/0.log"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.093904 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovn-controller/0.log"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094653 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16" exitCode=0
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094683 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094698 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c" exitCode=0
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094718 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096" exitCode=0
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094734 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f" exitCode=0
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094751 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85" exitCode=143
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094772 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad" exitCode=143
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094755 4897 scope.go:117] "RemoveContainer" containerID="da8347fb3c0aebea3a85959f11e26786d5507dd1ddfe24b700b7d4981f23ef63"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094903 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094950 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094973 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.094993 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.097541 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/2.log"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.098753 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/1.log"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.098794 4897 generic.go:334] "Generic (PLEG): container finished" podID="b5b30895-0d98-44e4-8e31-2c5ebe5e1850" containerID="a994cd3d62a87d79d3720ba26ad60a180a3ea6b395c07485dd6d24071ac72977" exitCode=2
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.098823 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerDied","Data":"a994cd3d62a87d79d3720ba26ad60a180a3ea6b395c07485dd6d24071ac72977"}
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.099580 4897 scope.go:117] "RemoveContainer" containerID="a994cd3d62a87d79d3720ba26ad60a180a3ea6b395c07485dd6d24071ac72977"
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.099790 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ldvzr_openshift-multus(b5b30895-0d98-44e4-8e31-2c5ebe5e1850)\"" pod="openshift-multus/multus-ldvzr" podUID="b5b30895-0d98-44e4-8e31-2c5ebe5e1850"
Feb 14 18:53:38 crc kubenswrapper[4897]: I0214 18:53:38.129090 4897 scope.go:117] "RemoveContainer" containerID="59dea786c4d826f44c37335db7c4d2752d93bf799ec0044b1c6fd22efab3256d"
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.521506 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c is running failed: container process not found" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.521607 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 is running failed: container process not found" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.521972 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 is running failed: container process not found" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.522179 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c is running failed: container process not found" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.522259 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 is running failed: container process not found" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.522284 4897 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="nbdb"
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.522361 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c is running failed: container process not found" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 14 18:53:38 crc kubenswrapper[4897]: E0214 18:53:38.522382 4897 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="sbdb"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.092121 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovn-acl-logging/0.log"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.092579 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovn-controller/0.log"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.092959 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.105870 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovn-acl-logging/0.log"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106254 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fz879_f304b761-40a3-41ba-af33-a2b0634a55fb/ovn-controller/0.log"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106488 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d" exitCode=0
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106509 4897 generic.go:334] "Generic (PLEG): container finished" podID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerID="1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec" exitCode=0
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106544 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"}
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106564 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"}
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106575 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fz879" event={"ID":"f304b761-40a3-41ba-af33-a2b0634a55fb","Type":"ContainerDied","Data":"94f2c9d0841081233151eb26444a5ad930742620ed1d41ae4112ef4e7a9c6506"}
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106590 4897 scope.go:117] "RemoveContainer" containerID="c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.106688 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fz879"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.109651 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/2.log"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.125281 4897 scope.go:117] "RemoveContainer" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132229 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-systemd-units\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") "
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132294 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7j56\" (UniqueName: \"kubernetes.io/projected/f304b761-40a3-41ba-af33-a2b0634a55fb-kube-api-access-j7j56\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") "
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132318 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f304b761-40a3-41ba-af33-a2b0634a55fb-ovn-node-metrics-cert\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") "
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132348 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-netns\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132365 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-kubelet\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132400 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-node-log\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132422 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-env-overrides\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132421 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132448 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-ovn\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132453 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132482 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-node-log" (OuterVolumeSpecName: "node-log") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132503 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-config\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132628 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-openvswitch\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132689 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-netd\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132716 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-systemd\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132748 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-ovn-kubernetes\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132782 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-slash\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132801 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-etc-openvswitch\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132820 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-bin\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132851 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-script-lib\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132906 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-log-socket\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: 
\"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132927 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-var-lib-openvswitch\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132956 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f304b761-40a3-41ba-af33-a2b0634a55fb\" (UID: \"f304b761-40a3-41ba-af33-a2b0634a55fb\") " Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.132849 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133055 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133079 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133077 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133098 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-slash" (OuterVolumeSpecName: "host-slash") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133098 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133120 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133120 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133138 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-log-socket" (OuterVolumeSpecName: "log-socket") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133136 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133158 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133173 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133336 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133978 4897 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.133999 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134011 4897 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134022 4897 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134077 4897 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134088 4897 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-slash\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134099 4897 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134109 4897 
reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134121 4897 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134131 4897 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134142 4897 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-log-socket\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134153 4897 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134165 4897 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134176 4897 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134186 4897 reconciler_common.go:293] "Volume detached for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134196 4897 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-node-log\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.134205 4897 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f304b761-40a3-41ba-af33-a2b0634a55fb-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.143515 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f304b761-40a3-41ba-af33-a2b0634a55fb-kube-api-access-j7j56" (OuterVolumeSpecName: "kube-api-access-j7j56") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "kube-api-access-j7j56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.156351 4897 scope.go:117] "RemoveContainer" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.156911 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f304b761-40a3-41ba-af33-a2b0634a55fb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.185153 4897 scope.go:117] "RemoveContainer" containerID="79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.186442 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f304b761-40a3-41ba-af33-a2b0634a55fb" (UID: "f304b761-40a3-41ba-af33-a2b0634a55fb"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.189662 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dchhj"] Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.189957 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.189974 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.189982 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="northd" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.189989 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="northd" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.189998 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190005 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" 
containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190013 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="pull" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190019 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="pull" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190043 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190055 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190067 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kubecfg-setup" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190073 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kubecfg-setup" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190083 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="extract" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190089 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="extract" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190099 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="nbdb" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190104 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="nbdb" Feb 14 18:53:39 crc 
kubenswrapper[4897]: E0214 18:53:39.190113 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="util" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190119 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="util" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190127 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-node" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190132 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-node" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190143 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="sbdb" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190149 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="sbdb" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190157 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190162 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190170 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190175 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190187 
4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-acl-logging" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190193 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-acl-logging" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190296 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190306 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190314 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-acl-logging" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190320 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba87404e-9bf2-4003-a612-0461c1af3db2" containerName="extract" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190331 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190339 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190346 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovn-controller" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190355 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller" Feb 14 18:53:39 crc 
kubenswrapper[4897]: I0214 18:53:39.190361 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="nbdb"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190369 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="kube-rbac-proxy-node"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190377 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="northd"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190384 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="sbdb"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190471 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190477 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.190489 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190495 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.190593 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" containerName="ovnkube-controller"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.192224 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.213198 4897 scope.go:117] "RemoveContainer" containerID="4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235671 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-cni-bin\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235728 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovn-node-metrics-cert\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235757 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-systemd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235777 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235803 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-systemd-units\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8bd\" (UniqueName: \"kubernetes.io/projected/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-kube-api-access-xj8bd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-var-lib-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235930 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-etc-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235954 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-slash\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235979 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-env-overrides\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.235998 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovnkube-config\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236013 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovnkube-script-lib\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236044 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-cni-netd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236062 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-node-log\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236077 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-log-socket\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236096 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-kubelet\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236110 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-run-netns\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236125 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236144 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-ovn\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236299 4897 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f304b761-40a3-41ba-af33-a2b0634a55fb-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236329 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7j56\" (UniqueName: \"kubernetes.io/projected/f304b761-40a3-41ba-af33-a2b0634a55fb-kube-api-access-j7j56\") on node \"crc\" DevicePath \"\""
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.236340 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f304b761-40a3-41ba-af33-a2b0634a55fb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.242829 4897 scope.go:117] "RemoveContainer" containerID="1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.257913 4897 scope.go:117] "RemoveContainer" containerID="837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.279517 4897 scope.go:117] "RemoveContainer" containerID="b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.302125 4897 scope.go:117] "RemoveContainer" containerID="81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.331160 4897 scope.go:117] "RemoveContainer" containerID="c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.331578 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16\": container with ID starting with c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16 not found: ID does not exist" containerID="c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.331607 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"} err="failed to get container status \"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16\": rpc error: code = NotFound desc = could not find container \"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16\": container with ID starting with c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16 not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.331627 4897 scope.go:117] "RemoveContainer" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.331801 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\": container with ID starting with 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c not found: ID does not exist" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.331827 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"} err="failed to get container status \"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\": rpc error: code = NotFound desc = could not find container \"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\": container with ID starting with 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.331839 4897 scope.go:117] "RemoveContainer" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.331993 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\": container with ID starting with 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 not found: ID does not exist" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.332012 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"} err="failed to get container status \"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\": rpc error: code = NotFound desc = could not find container \"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\": container with ID starting with 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.332024 4897 scope.go:117] "RemoveContainer" containerID="79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.332214 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\": container with ID starting with 79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f not found: ID does not exist" containerID="79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.332236 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"} err="failed to get container status \"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\": rpc error: code = NotFound desc = could not find container \"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\": container with ID starting with 79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.332247 4897 scope.go:117] "RemoveContainer" containerID="4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.332638 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\": container with ID starting with 4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d not found: ID does not exist" containerID="4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.332659 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"} err="failed to get container status \"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\": rpc error: code = NotFound desc = could not find container \"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\": container with ID starting with 4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.332685 4897 scope.go:117] "RemoveContainer" containerID="1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.333151 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\": container with ID starting with 1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec not found: ID does not exist" containerID="1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.333176 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"} err="failed to get container status \"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\": rpc error: code = NotFound desc = could not find container \"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\": container with ID starting with 1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.333188 4897 scope.go:117] "RemoveContainer" containerID="837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.335232 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\": container with ID starting with 837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85 not found: ID does not exist" containerID="837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.335255 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"} err="failed to get container status \"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\": rpc error: code = NotFound desc = could not find container \"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\": container with ID starting with 837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85 not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.335267 4897 scope.go:117] "RemoveContainer" containerID="b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.337011 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\": container with ID starting with b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad not found: ID does not exist" containerID="b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337052 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"} err="failed to get container status \"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\": rpc error: code = NotFound desc = could not find container \"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\": container with ID starting with b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337066 4897 scope.go:117] "RemoveContainer" containerID="81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337652 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-slash\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337691 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-env-overrides\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337713 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovnkube-config\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337726 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovnkube-script-lib\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337744 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-cni-netd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-node-log\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337773 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-log-socket\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-kubelet\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337806 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-run-netns\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337819 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337836 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-ovn\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337851 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-cni-bin\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337867 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovn-node-metrics-cert\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337889 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-systemd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337905 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337929 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-systemd-units\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337948 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8bd\" (UniqueName: \"kubernetes.io/projected/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-kube-api-access-xj8bd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337971 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-var-lib-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.337985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.338003 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-etc-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.338084 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-etc-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.338119 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-slash\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.338627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-env-overrides\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339010 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovnkube-config\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339131 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-cni-bin\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339158 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-kubelet\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339193 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-run-netns\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339226 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-cni-netd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339250 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-node-log\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339265 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-ovn\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-log-socket\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339289 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-systemd-units\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339300 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-run-systemd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339314 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339342 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-var-lib-openvswitch\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: E0214 18:53:39.339361 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\": container with ID starting with 81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5 not found: ID does not exist" containerID="81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339422 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovnkube-script-lib\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339458 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339472 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5"} err="failed to get container status \"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\": rpc error: code = NotFound desc = could not find container \"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\": container with ID starting with 81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5 not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.339501 4897 scope.go:117] "RemoveContainer" containerID="c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.343203 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16"} err="failed to get container status \"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16\": rpc error: code = NotFound desc = could not find container \"c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16\": container with ID starting with c1354ad043e398dad7fb05c9ffd329f33064647c36d846905fdbe863d60a6b16 not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.343246 4897 scope.go:117] "RemoveContainer" containerID="19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.343570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-ovn-node-metrics-cert\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.343590 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c"} err="failed to get container status \"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\": rpc error: code = NotFound desc = could not find container \"19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c\": container with ID starting with 19f557f386ad03f6c26b17e8bad34d97a9ad29728f6ad72533c5eccd6711138c not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.343611 4897 scope.go:117] "RemoveContainer" containerID="962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.347122 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096"} err="failed to get container status \"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\": rpc error: code = NotFound desc = could not find container \"962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096\": container with ID starting with 962f7e186671b24412ff26f3ba4ca4077bbee06907b9835c8d20f0a85ff68096 not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.347151 4897 scope.go:117] "RemoveContainer" containerID="79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.351128 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f"} err="failed to get container status \"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\": rpc error: code = NotFound desc = could not find container \"79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f\": container with ID starting with 79b10b83e1b261da116623d001f441aefc0d5fe5e207d21531c106a6390f576f not found: ID does not exist"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.351174 4897 scope.go:117] "RemoveContainer" containerID="4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"
Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.351646 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d"} err="failed to get container status \"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\": rpc error: code = NotFound desc = could
not find container \"4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d\": container with ID starting with 4aaae83a84fbd79042a1d80533337b180863a5eb4d8423d46db709dcdace319d not found: ID does not exist" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.351666 4897 scope.go:117] "RemoveContainer" containerID="1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.351891 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec"} err="failed to get container status \"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\": rpc error: code = NotFound desc = could not find container \"1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec\": container with ID starting with 1dcf8d8cf2c1aa725f800bc6fb1a40e047187d6561cd71b505de0d5e6ce11cec not found: ID does not exist" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.351910 4897 scope.go:117] "RemoveContainer" containerID="837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.352162 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85"} err="failed to get container status \"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\": rpc error: code = NotFound desc = could not find container \"837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85\": container with ID starting with 837f8692d51b5e7c4dbabb96cf26e433d00faefe7c1e0a8bfb39fe0678d1da85 not found: ID does not exist" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.352178 4897 scope.go:117] "RemoveContainer" containerID="b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 
18:53:39.352516 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad"} err="failed to get container status \"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\": rpc error: code = NotFound desc = could not find container \"b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad\": container with ID starting with b399d425756f7dffcd39a8da0b19d01aaf1be034cd17b52d9441484618aed1ad not found: ID does not exist" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.352557 4897 scope.go:117] "RemoveContainer" containerID="81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.353397 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5"} err="failed to get container status \"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\": rpc error: code = NotFound desc = could not find container \"81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5\": container with ID starting with 81eee8582b37adf8a0de5179243c762e24419e92659f04454c849ee43d92fce5 not found: ID does not exist" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.357477 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8bd\" (UniqueName: \"kubernetes.io/projected/2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6-kube-api-access-xj8bd\") pod \"ovnkube-node-dchhj\" (UID: \"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.432088 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fz879"] Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.438550 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-fz879"] Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.506387 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:39 crc kubenswrapper[4897]: I0214 18:53:39.800004 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f304b761-40a3-41ba-af33-a2b0634a55fb" path="/var/lib/kubelet/pods/f304b761-40a3-41ba-af33-a2b0634a55fb/volumes" Feb 14 18:53:40 crc kubenswrapper[4897]: I0214 18:53:40.116518 4897 generic.go:334] "Generic (PLEG): container finished" podID="2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6" containerID="3e970140c4f760329cdd9d5c5fbf2534fc57e182d7f652658508b1635888adf1" exitCode=0 Feb 14 18:53:40 crc kubenswrapper[4897]: I0214 18:53:40.116597 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerDied","Data":"3e970140c4f760329cdd9d5c5fbf2534fc57e182d7f652658508b1635888adf1"} Feb 14 18:53:40 crc kubenswrapper[4897]: I0214 18:53:40.116641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"3d4d0f3021b7f8369d8caa1c5c9a790bf9143bcfcae47ede467e9be0105176bd"} Feb 14 18:53:41 crc kubenswrapper[4897]: I0214 18:53:41.128772 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"955af6c62218a3d4254df2ba3199152c0bc98d36cd9640add75d6f95170aea97"} Feb 14 18:53:41 crc kubenswrapper[4897]: I0214 18:53:41.129132 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" 
event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"32e31c2a0231e8add8f1f52086bc011909828f8a3203aa6219e830d0f6a7d2c8"} Feb 14 18:53:41 crc kubenswrapper[4897]: I0214 18:53:41.129148 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"03a15890e0ecec601aa0462097cb0d545e82bd44bfe62f00ca72040321a89ad0"} Feb 14 18:53:41 crc kubenswrapper[4897]: I0214 18:53:41.129161 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"dde6fe38efefadb1d3d6a050d5ae0928fd840719f2ffaf11330a8bcd015ba57e"} Feb 14 18:53:41 crc kubenswrapper[4897]: I0214 18:53:41.129169 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"d156cfed3212aa31f542a217b29d6bcf607584d31d077c61c79e95b0ab316837"} Feb 14 18:53:41 crc kubenswrapper[4897]: I0214 18:53:41.129179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"056ae44b45aa964e72157cea84af9cbc505145be892030b539c94b45fb2ea72e"} Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.083921 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw"] Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.085795 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.088296 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.088613 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-l4ckd" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.091978 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.146412 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"ed74059c645dee5b01acd17f838f86bc04828976091a880b9052453807ce5cf5"} Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.197896 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg"] Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.198700 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.200724 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rwxfl" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.200870 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.201161 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rp7\" (UniqueName: \"kubernetes.io/projected/3d91a41b-7d8f-4ad4-9005-1a3bf7c40156-kube-api-access-r7rp7\") pod \"obo-prometheus-operator-68bc856cb9-nttxw\" (UID: \"3d91a41b-7d8f-4ad4-9005-1a3bf7c40156\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.210294 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2"] Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.211246 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.302637 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg\" (UID: \"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.302693 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg\" (UID: \"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.302798 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/869c7f86-090e-405c-9147-0815dbdd87c2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2\" (UID: \"869c7f86-090e-405c-9147-0815dbdd87c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.302914 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/869c7f86-090e-405c-9147-0815dbdd87c2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2\" (UID: \"869c7f86-090e-405c-9147-0815dbdd87c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" 
Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.302947 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7rp7\" (UniqueName: \"kubernetes.io/projected/3d91a41b-7d8f-4ad4-9005-1a3bf7c40156-kube-api-access-r7rp7\") pod \"obo-prometheus-operator-68bc856cb9-nttxw\" (UID: \"3d91a41b-7d8f-4ad4-9005-1a3bf7c40156\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.325142 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7rp7\" (UniqueName: \"kubernetes.io/projected/3d91a41b-7d8f-4ad4-9005-1a3bf7c40156-kube-api-access-r7rp7\") pod \"obo-prometheus-operator-68bc856cb9-nttxw\" (UID: \"3d91a41b-7d8f-4ad4-9005-1a3bf7c40156\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.403347 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.403852 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg\" (UID: \"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.403900 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg\" (UID: \"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc 
kubenswrapper[4897]: I0214 18:53:44.403961 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/869c7f86-090e-405c-9147-0815dbdd87c2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2\" (UID: \"869c7f86-090e-405c-9147-0815dbdd87c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.404015 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/869c7f86-090e-405c-9147-0815dbdd87c2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2\" (UID: \"869c7f86-090e-405c-9147-0815dbdd87c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.407024 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg\" (UID: \"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.407604 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg\" (UID: \"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.407712 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/869c7f86-090e-405c-9147-0815dbdd87c2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2\" (UID: \"869c7f86-090e-405c-9147-0815dbdd87c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.407874 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9t57n"] Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.408759 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.409333 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/869c7f86-090e-405c-9147-0815dbdd87c2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2\" (UID: \"869c7f86-090e-405c-9147-0815dbdd87c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.411336 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.411523 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kmcw7" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.447156 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(96135db9c38bc67a92dd8c3ea6b9047331eec9e7c6b57e8ec218e3d1eab54750): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.447243 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(96135db9c38bc67a92dd8c3ea6b9047331eec9e7c6b57e8ec218e3d1eab54750): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.447281 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(96135db9c38bc67a92dd8c3ea6b9047331eec9e7c6b57e8ec218e3d1eab54750): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.447323 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators(3d91a41b-7d8f-4ad4-9005-1a3bf7c40156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators(3d91a41b-7d8f-4ad4-9005-1a3bf7c40156)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(96135db9c38bc67a92dd8c3ea6b9047331eec9e7c6b57e8ec218e3d1eab54750): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" podUID="3d91a41b-7d8f-4ad4-9005-1a3bf7c40156" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.505444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f9fcba2-5e97-421b-8868-b497df246731-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9t57n\" (UID: \"7f9fcba2-5e97-421b-8868-b497df246731\") " pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.505531 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66p9j\" (UniqueName: \"kubernetes.io/projected/7f9fcba2-5e97-421b-8868-b497df246731-kube-api-access-66p9j\") pod \"observability-operator-59bdc8b94-9t57n\" (UID: \"7f9fcba2-5e97-421b-8868-b497df246731\") " pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.515111 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.528650 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.544296 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-q66h9"] Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.545892 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.549635 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nb59b" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.559543 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(24ae9c93c5732fb8729a797a54d8fdcaeaa795ea5abebb802313971d2129defe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.559606 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(24ae9c93c5732fb8729a797a54d8fdcaeaa795ea5abebb802313971d2129defe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.559632 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(24ae9c93c5732fb8729a797a54d8fdcaeaa795ea5abebb802313971d2129defe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.559680 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators(3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators(3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(24ae9c93c5732fb8729a797a54d8fdcaeaa795ea5abebb802313971d2129defe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" podUID="3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.586967 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(f210b1b98bf99f61ecae321ad751850d43944b9d65c492b99d6197e61932cbc3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.587056 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(f210b1b98bf99f61ecae321ad751850d43944b9d65c492b99d6197e61932cbc3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.587084 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(f210b1b98bf99f61ecae321ad751850d43944b9d65c492b99d6197e61932cbc3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.587131 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators(869c7f86-090e-405c-9147-0815dbdd87c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators(869c7f86-090e-405c-9147-0815dbdd87c2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(f210b1b98bf99f61ecae321ad751850d43944b9d65c492b99d6197e61932cbc3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" podUID="869c7f86-090e-405c-9147-0815dbdd87c2" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.606745 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4cx\" (UniqueName: \"kubernetes.io/projected/b37fa061-9005-4aec-8681-c1107aad5075-kube-api-access-jj4cx\") pod \"perses-operator-5bf474d74f-q66h9\" (UID: \"b37fa061-9005-4aec-8681-c1107aad5075\") " pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.606804 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f9fcba2-5e97-421b-8868-b497df246731-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9t57n\" (UID: \"7f9fcba2-5e97-421b-8868-b497df246731\") " pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.606849 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b37fa061-9005-4aec-8681-c1107aad5075-openshift-service-ca\") pod \"perses-operator-5bf474d74f-q66h9\" (UID: \"b37fa061-9005-4aec-8681-c1107aad5075\") " pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.606894 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66p9j\" (UniqueName: \"kubernetes.io/projected/7f9fcba2-5e97-421b-8868-b497df246731-kube-api-access-66p9j\") pod \"observability-operator-59bdc8b94-9t57n\" (UID: \"7f9fcba2-5e97-421b-8868-b497df246731\") " pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.610611 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f9fcba2-5e97-421b-8868-b497df246731-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9t57n\" (UID: \"7f9fcba2-5e97-421b-8868-b497df246731\") " pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.623417 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66p9j\" (UniqueName: \"kubernetes.io/projected/7f9fcba2-5e97-421b-8868-b497df246731-kube-api-access-66p9j\") pod \"observability-operator-59bdc8b94-9t57n\" (UID: \"7f9fcba2-5e97-421b-8868-b497df246731\") " pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.709191 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4cx\" (UniqueName: \"kubernetes.io/projected/b37fa061-9005-4aec-8681-c1107aad5075-kube-api-access-jj4cx\") pod \"perses-operator-5bf474d74f-q66h9\" (UID: \"b37fa061-9005-4aec-8681-c1107aad5075\") " pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.709260 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b37fa061-9005-4aec-8681-c1107aad5075-openshift-service-ca\") pod \"perses-operator-5bf474d74f-q66h9\" (UID: \"b37fa061-9005-4aec-8681-c1107aad5075\") " pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.710136 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b37fa061-9005-4aec-8681-c1107aad5075-openshift-service-ca\") pod \"perses-operator-5bf474d74f-q66h9\" (UID: \"b37fa061-9005-4aec-8681-c1107aad5075\") " 
pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.730394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4cx\" (UniqueName: \"kubernetes.io/projected/b37fa061-9005-4aec-8681-c1107aad5075-kube-api-access-jj4cx\") pod \"perses-operator-5bf474d74f-q66h9\" (UID: \"b37fa061-9005-4aec-8681-c1107aad5075\") " pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.803968 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.823120 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(dcf4338e7d00438ac8289bc00f9604d7ddaf1128dd4dc9c1660f96d88d02e252): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.823187 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(dcf4338e7d00438ac8289bc00f9604d7ddaf1128dd4dc9c1660f96d88d02e252): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.823210 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(dcf4338e7d00438ac8289bc00f9604d7ddaf1128dd4dc9c1660f96d88d02e252): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.823259 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9t57n_openshift-operators(7f9fcba2-5e97-421b-8868-b497df246731)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9t57n_openshift-operators(7f9fcba2-5e97-421b-8868-b497df246731)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(dcf4338e7d00438ac8289bc00f9604d7ddaf1128dd4dc9c1660f96d88d02e252): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podUID="7f9fcba2-5e97-421b-8868-b497df246731" Feb 14 18:53:44 crc kubenswrapper[4897]: I0214 18:53:44.878845 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.897364 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(7d89f425c9a153c9f624763ff8516dbacdad3260e2a16db8e5f40beb228e9df0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.897454 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(7d89f425c9a153c9f624763ff8516dbacdad3260e2a16db8e5f40beb228e9df0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.897488 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(7d89f425c9a153c9f624763ff8516dbacdad3260e2a16db8e5f40beb228e9df0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:44 crc kubenswrapper[4897]: E0214 18:53:44.897576 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-q66h9_openshift-operators(b37fa061-9005-4aec-8681-c1107aad5075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-q66h9_openshift-operators(b37fa061-9005-4aec-8681-c1107aad5075)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(7d89f425c9a153c9f624763ff8516dbacdad3260e2a16db8e5f40beb228e9df0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.160296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" event={"ID":"2a2fd379-8c0d-4f83-9b8d-076fc2a9f1b6","Type":"ContainerStarted","Data":"3791bd406628b9b2278bb987947faeb8bed78d8cf537cfd40201b8900dfa9446"} Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.160804 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.160816 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.160825 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.211299 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" podStartSLOduration=7.21128327 
podStartE2EDuration="7.21128327s" podCreationTimestamp="2026-02-14 18:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:53:46.209539246 +0000 UTC m=+679.185947729" watchObservedRunningTime="2026-02-14 18:53:46.21128327 +0000 UTC m=+679.187691743" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.223745 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.236603 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.319631 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-q66h9"] Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.319730 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.320166 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.332881 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg"] Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.333112 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.333792 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.338466 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2"] Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.338597 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.339224 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.346936 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw"] Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.347155 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.347653 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.365816 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9t57n"] Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.365933 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:46 crc kubenswrapper[4897]: I0214 18:53:46.366373 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.378826 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(bcf01ead2c448eb64c8e8e4b179393b0654dc0f90bd2b5181b8330e591d685ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.378881 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(bcf01ead2c448eb64c8e8e4b179393b0654dc0f90bd2b5181b8330e591d685ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.378902 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(bcf01ead2c448eb64c8e8e4b179393b0654dc0f90bd2b5181b8330e591d685ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.378941 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-q66h9_openshift-operators(b37fa061-9005-4aec-8681-c1107aad5075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-q66h9_openshift-operators(b37fa061-9005-4aec-8681-c1107aad5075)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(bcf01ead2c448eb64c8e8e4b179393b0654dc0f90bd2b5181b8330e591d685ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.419607 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(4cba021b8df0691da2c02b8178ee0f4c6654f9be331a37e10807ecec7a98e719): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.419786 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(4cba021b8df0691da2c02b8178ee0f4c6654f9be331a37e10807ecec7a98e719): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.419865 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(4cba021b8df0691da2c02b8178ee0f4c6654f9be331a37e10807ecec7a98e719): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.419958 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators(3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators(3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(4cba021b8df0691da2c02b8178ee0f4c6654f9be331a37e10807ecec7a98e719): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" podUID="3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.430962 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(25d850d4d9d26875f6be621ddff0b9298b0cfb03fd50460d9abda15d93bf7f33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.431092 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(25d850d4d9d26875f6be621ddff0b9298b0cfb03fd50460d9abda15d93bf7f33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.431170 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(25d850d4d9d26875f6be621ddff0b9298b0cfb03fd50460d9abda15d93bf7f33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.431277 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators(3d91a41b-7d8f-4ad4-9005-1a3bf7c40156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators(3d91a41b-7d8f-4ad4-9005-1a3bf7c40156)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(25d850d4d9d26875f6be621ddff0b9298b0cfb03fd50460d9abda15d93bf7f33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" podUID="3d91a41b-7d8f-4ad4-9005-1a3bf7c40156" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.451254 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(db891dbfe4dcad5653f6bcfaeed1827222edefdf50e0365db3750ebd014ab8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.451322 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(db891dbfe4dcad5653f6bcfaeed1827222edefdf50e0365db3750ebd014ab8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.451344 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(db891dbfe4dcad5653f6bcfaeed1827222edefdf50e0365db3750ebd014ab8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.451389 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators(869c7f86-090e-405c-9147-0815dbdd87c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators(869c7f86-090e-405c-9147-0815dbdd87c2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(db891dbfe4dcad5653f6bcfaeed1827222edefdf50e0365db3750ebd014ab8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" podUID="869c7f86-090e-405c-9147-0815dbdd87c2" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.464095 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(e7552a5062d2d9afdab18793bf95a0d4e1ea1d8a57ad50cba3c99925b594cfac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.464155 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(e7552a5062d2d9afdab18793bf95a0d4e1ea1d8a57ad50cba3c99925b594cfac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.464179 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(e7552a5062d2d9afdab18793bf95a0d4e1ea1d8a57ad50cba3c99925b594cfac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:53:46 crc kubenswrapper[4897]: E0214 18:53:46.464222 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9t57n_openshift-operators(7f9fcba2-5e97-421b-8868-b497df246731)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9t57n_openshift-operators(7f9fcba2-5e97-421b-8868-b497df246731)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(e7552a5062d2d9afdab18793bf95a0d4e1ea1d8a57ad50cba3c99925b594cfac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podUID="7f9fcba2-5e97-421b-8868-b497df246731" Feb 14 18:53:50 crc kubenswrapper[4897]: I0214 18:53:50.793970 4897 scope.go:117] "RemoveContainer" containerID="a994cd3d62a87d79d3720ba26ad60a180a3ea6b395c07485dd6d24071ac72977" Feb 14 18:53:50 crc kubenswrapper[4897]: E0214 18:53:50.794443 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ldvzr_openshift-multus(b5b30895-0d98-44e4-8e31-2c5ebe5e1850)\"" pod="openshift-multus/multus-ldvzr" podUID="b5b30895-0d98-44e4-8e31-2c5ebe5e1850" Feb 14 18:53:57 crc kubenswrapper[4897]: I0214 18:53:57.793559 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:57 crc kubenswrapper[4897]: I0214 18:53:57.798162 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:57 crc kubenswrapper[4897]: E0214 18:53:57.831691 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(6484e3a840aed4921428767fd5c8c319654b6d3665af278dc0a8b50fc646bc35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:53:57 crc kubenswrapper[4897]: E0214 18:53:57.832143 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(6484e3a840aed4921428767fd5c8c319654b6d3665af278dc0a8b50fc646bc35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:57 crc kubenswrapper[4897]: E0214 18:53:57.832178 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(6484e3a840aed4921428767fd5c8c319654b6d3665af278dc0a8b50fc646bc35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:53:57 crc kubenswrapper[4897]: E0214 18:53:57.832240 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators(3d91a41b-7d8f-4ad4-9005-1a3bf7c40156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators(3d91a41b-7d8f-4ad4-9005-1a3bf7c40156)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nttxw_openshift-operators_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156_0(6484e3a840aed4921428767fd5c8c319654b6d3665af278dc0a8b50fc646bc35): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" podUID="3d91a41b-7d8f-4ad4-9005-1a3bf7c40156" Feb 14 18:53:59 crc kubenswrapper[4897]: I0214 18:53:59.793093 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:59 crc kubenswrapper[4897]: I0214 18:53:59.793766 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:59 crc kubenswrapper[4897]: E0214 18:53:59.845543 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(335d7882990910f6404fba32d237cfefc75b9a2c7a20e0ea0f371ea6d8c4ed19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 18:53:59 crc kubenswrapper[4897]: E0214 18:53:59.845898 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(335d7882990910f6404fba32d237cfefc75b9a2c7a20e0ea0f371ea6d8c4ed19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:59 crc kubenswrapper[4897]: E0214 18:53:59.845925 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(335d7882990910f6404fba32d237cfefc75b9a2c7a20e0ea0f371ea6d8c4ed19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:53:59 crc kubenswrapper[4897]: E0214 18:53:59.845981 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators(869c7f86-090e-405c-9147-0815dbdd87c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators(869c7f86-090e-405c-9147-0815dbdd87c2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_openshift-operators_869c7f86-090e-405c-9147-0815dbdd87c2_0(335d7882990910f6404fba32d237cfefc75b9a2c7a20e0ea0f371ea6d8c4ed19): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" podUID="869c7f86-090e-405c-9147-0815dbdd87c2" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.793410 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.793460 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.793529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.794085 4897 scope.go:117] "RemoveContainer" containerID="a994cd3d62a87d79d3720ba26ad60a180a3ea6b395c07485dd6d24071ac72977" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.794748 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.795009 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:01 crc kubenswrapper[4897]: I0214 18:54:01.795179 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.831457 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(ac2758634a07d76741121ba83e2b0e427b9c6a8f16ca766c6ead91e669554994): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.831529 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(ac2758634a07d76741121ba83e2b0e427b9c6a8f16ca766c6ead91e669554994): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.831555 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(ac2758634a07d76741121ba83e2b0e427b9c6a8f16ca766c6ead91e669554994): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.831603 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators(3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators(3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_openshift-operators_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5_0(ac2758634a07d76741121ba83e2b0e427b9c6a8f16ca766c6ead91e669554994): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" podUID="3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.851231 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(3d7b4d3afa0af0964c1985c357501052549f428d8379a4e5b64251e31c0b0eb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.851450 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(3d7b4d3afa0af0964c1985c357501052549f428d8379a4e5b64251e31c0b0eb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.851518 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(3d7b4d3afa0af0964c1985c357501052549f428d8379a4e5b64251e31c0b0eb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.851606 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9t57n_openshift-operators(7f9fcba2-5e97-421b-8868-b497df246731)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9t57n_openshift-operators(7f9fcba2-5e97-421b-8868-b497df246731)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9t57n_openshift-operators_7f9fcba2-5e97-421b-8868-b497df246731_0(3d7b4d3afa0af0964c1985c357501052549f428d8379a4e5b64251e31c0b0eb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podUID="7f9fcba2-5e97-421b-8868-b497df246731" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.864691 4897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(f66489e13e3a496f2ffa5ddbf55833a1cb5a3164d8afde3d86b6a161862d37c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.864740 4897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(f66489e13e3a496f2ffa5ddbf55833a1cb5a3164d8afde3d86b6a161862d37c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.864761 4897 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(f66489e13e3a496f2ffa5ddbf55833a1cb5a3164d8afde3d86b6a161862d37c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:01 crc kubenswrapper[4897]: E0214 18:54:01.864797 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-q66h9_openshift-operators(b37fa061-9005-4aec-8681-c1107aad5075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-q66h9_openshift-operators(b37fa061-9005-4aec-8681-c1107aad5075)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-q66h9_openshift-operators_b37fa061-9005-4aec-8681-c1107aad5075_0(f66489e13e3a496f2ffa5ddbf55833a1cb5a3164d8afde3d86b6a161862d37c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" Feb 14 18:54:02 crc kubenswrapper[4897]: I0214 18:54:02.260650 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ldvzr_b5b30895-0d98-44e4-8e31-2c5ebe5e1850/kube-multus/2.log" Feb 14 18:54:02 crc kubenswrapper[4897]: I0214 18:54:02.261188 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ldvzr" event={"ID":"b5b30895-0d98-44e4-8e31-2c5ebe5e1850","Type":"ContainerStarted","Data":"86dac0d01badbaffb435e894e9c627ca94872eadf8cc1c4d20f444c65d901404"} Feb 14 18:54:08 crc kubenswrapper[4897]: I0214 18:54:08.793142 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:54:08 crc kubenswrapper[4897]: I0214 18:54:08.794112 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" Feb 14 18:54:09 crc kubenswrapper[4897]: I0214 18:54:09.218089 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw"] Feb 14 18:54:09 crc kubenswrapper[4897]: I0214 18:54:09.306511 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" event={"ID":"3d91a41b-7d8f-4ad4-9005-1a3bf7c40156","Type":"ContainerStarted","Data":"76238ce4005f416983148a009b0db348a42da72c6f35e7f5cfd46cba740ae66c"} Feb 14 18:54:09 crc kubenswrapper[4897]: I0214 18:54:09.545300 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dchhj" Feb 14 18:54:10 crc kubenswrapper[4897]: I0214 18:54:10.793113 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:54:10 crc kubenswrapper[4897]: I0214 18:54:10.793868 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" Feb 14 18:54:11 crc kubenswrapper[4897]: I0214 18:54:11.193249 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2"] Feb 14 18:54:11 crc kubenswrapper[4897]: I0214 18:54:11.320164 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" event={"ID":"869c7f86-090e-405c-9147-0815dbdd87c2","Type":"ContainerStarted","Data":"63e30f27cdfeb493770cadcedc8ff884129b094d39dfaa2558624d0a6d4f6fdc"} Feb 14 18:54:15 crc kubenswrapper[4897]: I0214 18:54:15.348398 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" event={"ID":"3d91a41b-7d8f-4ad4-9005-1a3bf7c40156","Type":"ContainerStarted","Data":"26294e9ee38a39e7f550ab9dbd324cc294c9c44b0a753e1fda4b3b864c5e083a"} Feb 14 18:54:15 crc kubenswrapper[4897]: I0214 18:54:15.371809 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nttxw" podStartSLOduration=25.962869782 podStartE2EDuration="31.371778084s" podCreationTimestamp="2026-02-14 18:53:44 +0000 UTC" firstStartedPulling="2026-02-14 18:54:09.241328993 +0000 UTC m=+702.217737506" lastFinishedPulling="2026-02-14 18:54:14.650237315 +0000 UTC m=+707.626645808" observedRunningTime="2026-02-14 18:54:15.369265055 +0000 UTC m=+708.345673538" watchObservedRunningTime="2026-02-14 18:54:15.371778084 +0000 UTC m=+708.348186607" Feb 14 18:54:15 crc kubenswrapper[4897]: I0214 18:54:15.793933 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:15 crc kubenswrapper[4897]: I0214 18:54:15.794710 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:16 crc kubenswrapper[4897]: I0214 18:54:16.368116 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" event={"ID":"869c7f86-090e-405c-9147-0815dbdd87c2","Type":"ContainerStarted","Data":"a45a915c987b2447b37b934fd0ce1380868c0388f0c07594f247bb05fb9afc6a"} Feb 14 18:54:16 crc kubenswrapper[4897]: I0214 18:54:16.371535 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-q66h9"] Feb 14 18:54:16 crc kubenswrapper[4897]: W0214 18:54:16.372869 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb37fa061_9005_4aec_8681_c1107aad5075.slice/crio-828c881d4e65d0cdbc93f41d0c21ce97d6735e14aca3235e25efa201008dea59 WatchSource:0}: Error finding container 828c881d4e65d0cdbc93f41d0c21ce97d6735e14aca3235e25efa201008dea59: Status 404 returned error can't find the container with id 828c881d4e65d0cdbc93f41d0c21ce97d6735e14aca3235e25efa201008dea59 Feb 14 18:54:16 crc kubenswrapper[4897]: I0214 18:54:16.405874 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2" podStartSLOduration=28.292724031 podStartE2EDuration="32.405851566s" podCreationTimestamp="2026-02-14 18:53:44 +0000 UTC" firstStartedPulling="2026-02-14 18:54:11.211329341 +0000 UTC m=+704.187737824" lastFinishedPulling="2026-02-14 18:54:15.324456876 +0000 UTC m=+708.300865359" observedRunningTime="2026-02-14 18:54:16.399948371 +0000 UTC m=+709.376356874" watchObservedRunningTime="2026-02-14 18:54:16.405851566 +0000 UTC 
m=+709.382260049" Feb 14 18:54:16 crc kubenswrapper[4897]: I0214 18:54:16.793442 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:54:16 crc kubenswrapper[4897]: I0214 18:54:16.794481 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.102132 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg"] Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.382086 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" event={"ID":"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5","Type":"ContainerStarted","Data":"ff9d45cc3d855a76d08d66ed6a24fed9bae1353a39ef0b025dd3f244909162fa"} Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.382433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" event={"ID":"3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5","Type":"ContainerStarted","Data":"b17683d979192dfda9b9f9a7f3ef12559df5e36115086f644b2c00b58d0b164d"} Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.383508 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" event={"ID":"b37fa061-9005-4aec-8681-c1107aad5075","Type":"ContainerStarted","Data":"828c881d4e65d0cdbc93f41d0c21ce97d6735e14aca3235e25efa201008dea59"} Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.424929 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg" podStartSLOduration=33.424903819 podStartE2EDuration="33.424903819s" 
podCreationTimestamp="2026-02-14 18:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:54:17.408451104 +0000 UTC m=+710.384859617" watchObservedRunningTime="2026-02-14 18:54:17.424903819 +0000 UTC m=+710.401312342" Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.795290 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:17 crc kubenswrapper[4897]: I0214 18:54:17.801905 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:18 crc kubenswrapper[4897]: I0214 18:54:18.018142 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9t57n"] Feb 14 18:54:18 crc kubenswrapper[4897]: W0214 18:54:18.027873 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f9fcba2_5e97_421b_8868_b497df246731.slice/crio-f4a949f66b326b0351fd94aac8e8eff80f0224786ab5f70d1c0961a58a1bfe6e WatchSource:0}: Error finding container f4a949f66b326b0351fd94aac8e8eff80f0224786ab5f70d1c0961a58a1bfe6e: Status 404 returned error can't find the container with id f4a949f66b326b0351fd94aac8e8eff80f0224786ab5f70d1c0961a58a1bfe6e Feb 14 18:54:18 crc kubenswrapper[4897]: I0214 18:54:18.391989 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" event={"ID":"7f9fcba2-5e97-421b-8868-b497df246731","Type":"ContainerStarted","Data":"f4a949f66b326b0351fd94aac8e8eff80f0224786ab5f70d1c0961a58a1bfe6e"} Feb 14 18:54:19 crc kubenswrapper[4897]: I0214 18:54:19.400273 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" 
event={"ID":"b37fa061-9005-4aec-8681-c1107aad5075","Type":"ContainerStarted","Data":"7501190587cc653fccf0040d58abe9a0b14faf793753171e43cf6fae775e76a3"} Feb 14 18:54:19 crc kubenswrapper[4897]: I0214 18:54:19.400452 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:19 crc kubenswrapper[4897]: I0214 18:54:19.416981 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podStartSLOduration=32.95149454 podStartE2EDuration="35.416962755s" podCreationTimestamp="2026-02-14 18:53:44 +0000 UTC" firstStartedPulling="2026-02-14 18:54:16.377198751 +0000 UTC m=+709.353607234" lastFinishedPulling="2026-02-14 18:54:18.842666956 +0000 UTC m=+711.819075449" observedRunningTime="2026-02-14 18:54:19.414662884 +0000 UTC m=+712.391071377" watchObservedRunningTime="2026-02-14 18:54:19.416962755 +0000 UTC m=+712.393371248" Feb 14 18:54:23 crc kubenswrapper[4897]: I0214 18:54:23.445591 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" event={"ID":"7f9fcba2-5e97-421b-8868-b497df246731","Type":"ContainerStarted","Data":"380219c2ec4bd88df307bb15e72c3d8aa719c0da1019b41cb199dfbd2f7e75b4"} Feb 14 18:54:23 crc kubenswrapper[4897]: I0214 18:54:23.446575 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:23 crc kubenswrapper[4897]: I0214 18:54:23.448243 4897 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9t57n container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.35:8081/healthz\": dial tcp 10.217.0.35:8081: connect: connection refused" start-of-body= Feb 14 18:54:23 crc kubenswrapper[4897]: I0214 18:54:23.448327 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podUID="7f9fcba2-5e97-421b-8868-b497df246731" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.35:8081/healthz\": dial tcp 10.217.0.35:8081: connect: connection refused" Feb 14 18:54:23 crc kubenswrapper[4897]: I0214 18:54:23.482068 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podStartSLOduration=34.554518894 podStartE2EDuration="39.482001826s" podCreationTimestamp="2026-02-14 18:53:44 +0000 UTC" firstStartedPulling="2026-02-14 18:54:18.031327922 +0000 UTC m=+711.007736405" lastFinishedPulling="2026-02-14 18:54:22.958810854 +0000 UTC m=+715.935219337" observedRunningTime="2026-02-14 18:54:23.477813544 +0000 UTC m=+716.454222087" watchObservedRunningTime="2026-02-14 18:54:23.482001826 +0000 UTC m=+716.458410349" Feb 14 18:54:24 crc kubenswrapper[4897]: I0214 18:54:24.455744 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" Feb 14 18:54:24 crc kubenswrapper[4897]: I0214 18:54:24.883172 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.682293 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5jj96"] Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.684264 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.687523 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5jj96"] Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.689316 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.689569 4897 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4pxr9" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.689705 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.694505 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-slgqx"] Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.695261 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-slgqx" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.697107 4897 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cqtcg" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.704232 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pmlmt"] Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.704975 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.706460 4897 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bmb5h" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.715833 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-slgqx"] Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.721023 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7jt\" (UniqueName: \"kubernetes.io/projected/89273d01-2f22-4f94-8217-2b51d8b1319b-kube-api-access-7d7jt\") pod \"cert-manager-858654f9db-slgqx\" (UID: \"89273d01-2f22-4f94-8217-2b51d8b1319b\") " pod="cert-manager/cert-manager-858654f9db-slgqx" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.721111 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdj84\" (UniqueName: \"kubernetes.io/projected/6fe45416-c3cc-40b0-bffb-d43af376cebe-kube-api-access-fdj84\") pod \"cert-manager-cainjector-cf98fcc89-5jj96\" (UID: \"6fe45416-c3cc-40b0-bffb-d43af376cebe\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.721176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdtjq\" (UniqueName: \"kubernetes.io/projected/0b1febb3-dc70-4cd5-9a48-024547405da7-kube-api-access-cdtjq\") pod \"cert-manager-webhook-687f57d79b-pmlmt\" (UID: \"0b1febb3-dc70-4cd5-9a48-024547405da7\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.733949 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pmlmt"] Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.822820 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7jt\" (UniqueName: \"kubernetes.io/projected/89273d01-2f22-4f94-8217-2b51d8b1319b-kube-api-access-7d7jt\") pod \"cert-manager-858654f9db-slgqx\" (UID: \"89273d01-2f22-4f94-8217-2b51d8b1319b\") " pod="cert-manager/cert-manager-858654f9db-slgqx" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.822918 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdj84\" (UniqueName: \"kubernetes.io/projected/6fe45416-c3cc-40b0-bffb-d43af376cebe-kube-api-access-fdj84\") pod \"cert-manager-cainjector-cf98fcc89-5jj96\" (UID: \"6fe45416-c3cc-40b0-bffb-d43af376cebe\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.822980 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdtjq\" (UniqueName: \"kubernetes.io/projected/0b1febb3-dc70-4cd5-9a48-024547405da7-kube-api-access-cdtjq\") pod \"cert-manager-webhook-687f57d79b-pmlmt\" (UID: \"0b1febb3-dc70-4cd5-9a48-024547405da7\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.847363 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7jt\" (UniqueName: \"kubernetes.io/projected/89273d01-2f22-4f94-8217-2b51d8b1319b-kube-api-access-7d7jt\") pod \"cert-manager-858654f9db-slgqx\" (UID: \"89273d01-2f22-4f94-8217-2b51d8b1319b\") " pod="cert-manager/cert-manager-858654f9db-slgqx" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.848955 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdj84\" (UniqueName: \"kubernetes.io/projected/6fe45416-c3cc-40b0-bffb-d43af376cebe-kube-api-access-fdj84\") pod \"cert-manager-cainjector-cf98fcc89-5jj96\" (UID: \"6fe45416-c3cc-40b0-bffb-d43af376cebe\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" Feb 14 18:54:34 crc kubenswrapper[4897]: I0214 18:54:34.856070 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdtjq\" (UniqueName: \"kubernetes.io/projected/0b1febb3-dc70-4cd5-9a48-024547405da7-kube-api-access-cdtjq\") pod \"cert-manager-webhook-687f57d79b-pmlmt\" (UID: \"0b1febb3-dc70-4cd5-9a48-024547405da7\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.009698 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.017413 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-slgqx" Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.032550 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.463644 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5jj96"] Feb 14 18:54:35 crc kubenswrapper[4897]: W0214 18:54:35.469513 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fe45416_c3cc_40b0_bffb_d43af376cebe.slice/crio-521009700ed863e6b902db3e2f37c84518bdf1c96e418ee90f9560490c0e6dfb WatchSource:0}: Error finding container 521009700ed863e6b902db3e2f37c84518bdf1c96e418ee90f9560490c0e6dfb: Status 404 returned error can't find the container with id 521009700ed863e6b902db3e2f37c84518bdf1c96e418ee90f9560490c0e6dfb Feb 14 18:54:35 crc kubenswrapper[4897]: W0214 18:54:35.510185 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89273d01_2f22_4f94_8217_2b51d8b1319b.slice/crio-2b7e91924db111d7b98e22d7b7ae6857244767365b6fa3713a2074cf61c3adb9 WatchSource:0}: Error finding container 2b7e91924db111d7b98e22d7b7ae6857244767365b6fa3713a2074cf61c3adb9: Status 404 returned error can't find the container with id 2b7e91924db111d7b98e22d7b7ae6857244767365b6fa3713a2074cf61c3adb9 Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.510726 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-slgqx"] Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.515982 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pmlmt"] Feb 14 18:54:35 crc kubenswrapper[4897]: W0214 18:54:35.518352 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1febb3_dc70_4cd5_9a48_024547405da7.slice/crio-f29eff3745aef673545a0c5c25fa44141878cbb17365ab0814d41fbd97af625e WatchSource:0}: Error finding container f29eff3745aef673545a0c5c25fa44141878cbb17365ab0814d41fbd97af625e: Status 404 returned error can't find the container with id f29eff3745aef673545a0c5c25fa44141878cbb17365ab0814d41fbd97af625e Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.526366 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-slgqx" event={"ID":"89273d01-2f22-4f94-8217-2b51d8b1319b","Type":"ContainerStarted","Data":"2b7e91924db111d7b98e22d7b7ae6857244767365b6fa3713a2074cf61c3adb9"} Feb 14 18:54:35 crc kubenswrapper[4897]: I0214 18:54:35.527243 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" event={"ID":"6fe45416-c3cc-40b0-bffb-d43af376cebe","Type":"ContainerStarted","Data":"521009700ed863e6b902db3e2f37c84518bdf1c96e418ee90f9560490c0e6dfb"} Feb 14 18:54:36 crc kubenswrapper[4897]: I0214 
18:54:36.536222 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" event={"ID":"0b1febb3-dc70-4cd5-9a48-024547405da7","Type":"ContainerStarted","Data":"f29eff3745aef673545a0c5c25fa44141878cbb17365ab0814d41fbd97af625e"} Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.600095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-slgqx" event={"ID":"89273d01-2f22-4f94-8217-2b51d8b1319b","Type":"ContainerStarted","Data":"084c8b7f57d130818d2048798b12cef68a45cb7f77ede9e162e6ac74523458c1"} Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.601465 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" event={"ID":"6fe45416-c3cc-40b0-bffb-d43af376cebe","Type":"ContainerStarted","Data":"dd604a71dda34adc33613cc6f3221173c41054f474d3ab0f60b6af2cfe7edc08"} Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.602661 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" event={"ID":"0b1febb3-dc70-4cd5-9a48-024547405da7","Type":"ContainerStarted","Data":"1be5652285d6be8f65ac3bac27c06b387ee73047042629391d39478c1af3cf62"} Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.602811 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.617088 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-slgqx" podStartSLOduration=1.8656107720000001 podStartE2EDuration="5.617071267s" podCreationTimestamp="2026-02-14 18:54:34 +0000 UTC" firstStartedPulling="2026-02-14 18:54:35.512903824 +0000 UTC m=+728.489312307" lastFinishedPulling="2026-02-14 18:54:39.264364279 +0000 UTC m=+732.240772802" observedRunningTime="2026-02-14 18:54:39.614649491 +0000 UTC m=+732.591057974" 
watchObservedRunningTime="2026-02-14 18:54:39.617071267 +0000 UTC m=+732.593479740" Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.634108 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" podStartSLOduration=1.9758560790000002 podStartE2EDuration="5.634091097s" podCreationTimestamp="2026-02-14 18:54:34 +0000 UTC" firstStartedPulling="2026-02-14 18:54:35.52078906 +0000 UTC m=+728.497197543" lastFinishedPulling="2026-02-14 18:54:39.179024078 +0000 UTC m=+732.155432561" observedRunningTime="2026-02-14 18:54:39.632429016 +0000 UTC m=+732.608837509" watchObservedRunningTime="2026-02-14 18:54:39.634091097 +0000 UTC m=+732.610499580" Feb 14 18:54:39 crc kubenswrapper[4897]: I0214 18:54:39.659899 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5jj96" podStartSLOduration=1.952695836 podStartE2EDuration="5.659881781s" podCreationTimestamp="2026-02-14 18:54:34 +0000 UTC" firstStartedPulling="2026-02-14 18:54:35.471858334 +0000 UTC m=+728.448266817" lastFinishedPulling="2026-02-14 18:54:39.179044269 +0000 UTC m=+732.155452762" observedRunningTime="2026-02-14 18:54:39.657534858 +0000 UTC m=+732.633943371" watchObservedRunningTime="2026-02-14 18:54:39.659881781 +0000 UTC m=+732.636290274" Feb 14 18:54:45 crc kubenswrapper[4897]: I0214 18:54:45.036732 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" Feb 14 18:55:01 crc kubenswrapper[4897]: I0214 18:55:01.726874 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:55:01 crc kubenswrapper[4897]: I0214 18:55:01.727724 4897 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:55:05 crc kubenswrapper[4897]: I0214 18:55:05.126373 4897 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.642781 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh"] Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.645971 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.649761 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.650330 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh"] Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.738936 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.739013 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.739106 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95f9\" (UniqueName: \"kubernetes.io/projected/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-kube-api-access-j95f9\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.840375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.840459 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95f9\" (UniqueName: \"kubernetes.io/projected/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-kube-api-access-j95f9\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.840505 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-util\") pod 
\"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.840953 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.840967 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:07 crc kubenswrapper[4897]: I0214 18:55:07.870163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95f9\" (UniqueName: \"kubernetes.io/projected/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-kube-api-access-j95f9\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.007364 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7"] Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.008806 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.018143 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.048732 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7"] Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.147605 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwsp\" (UniqueName: \"kubernetes.io/projected/b1d83377-16af-4d9a-ad7d-3d0c2059b951-kube-api-access-shwsp\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.147948 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.148013 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.249045 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shwsp\" (UniqueName: \"kubernetes.io/projected/b1d83377-16af-4d9a-ad7d-3d0c2059b951-kube-api-access-shwsp\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.249121 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.249173 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.249659 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.250561 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.270502 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shwsp\" (UniqueName: \"kubernetes.io/projected/b1d83377-16af-4d9a-ad7d-3d0c2059b951-kube-api-access-shwsp\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.275814 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh"] Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.323672 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.525687 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7"] Feb 14 18:55:08 crc kubenswrapper[4897]: W0214 18:55:08.575728 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d83377_16af_4d9a_ad7d_3d0c2059b951.slice/crio-1f5e123dbd916c8ed2386a865305797dccd3d4647c3514c1cfe4e8727fb3863b WatchSource:0}: Error finding container 1f5e123dbd916c8ed2386a865305797dccd3d4647c3514c1cfe4e8727fb3863b: Status 404 returned error can't find the container with id 1f5e123dbd916c8ed2386a865305797dccd3d4647c3514c1cfe4e8727fb3863b Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.856347 4897 generic.go:334] "Generic (PLEG): container finished" podID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerID="5682e9c7d6d14c167766f33e193c812259f83eedaa87a7b0274ce8fab4bda233" exitCode=0 Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.856444 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" event={"ID":"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e","Type":"ContainerDied","Data":"5682e9c7d6d14c167766f33e193c812259f83eedaa87a7b0274ce8fab4bda233"} Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.856493 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" event={"ID":"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e","Type":"ContainerStarted","Data":"254bd7b0b83301a3f87701143cb2d4cd96cbacfb64d7e5de3f5758b025180c74"} Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.860521 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerID="c582c51a4202f8e6b2cd60f9c59ef4d6a501bf07d9750ebb3df3ed42f9bfa631" exitCode=0 Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.860590 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" event={"ID":"b1d83377-16af-4d9a-ad7d-3d0c2059b951","Type":"ContainerDied","Data":"c582c51a4202f8e6b2cd60f9c59ef4d6a501bf07d9750ebb3df3ed42f9bfa631"} Feb 14 18:55:08 crc kubenswrapper[4897]: I0214 18:55:08.860625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" event={"ID":"b1d83377-16af-4d9a-ad7d-3d0c2059b951","Type":"ContainerStarted","Data":"1f5e123dbd916c8ed2386a865305797dccd3d4647c3514c1cfe4e8727fb3863b"} Feb 14 18:55:10 crc kubenswrapper[4897]: I0214 18:55:10.882550 4897 generic.go:334] "Generic (PLEG): container finished" podID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerID="7a0ccd2013af3c15b7d9b01bc55c9216a82c745e9ddb7110362f8c0228a03f73" exitCode=0 Feb 14 18:55:10 crc kubenswrapper[4897]: I0214 18:55:10.882650 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" event={"ID":"b1d83377-16af-4d9a-ad7d-3d0c2059b951","Type":"ContainerDied","Data":"7a0ccd2013af3c15b7d9b01bc55c9216a82c745e9ddb7110362f8c0228a03f73"} Feb 14 18:55:10 crc kubenswrapper[4897]: I0214 18:55:10.888997 4897 generic.go:334] "Generic (PLEG): container finished" podID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerID="3ebe1de4d9898959cdb10c52f63cf827c7c942ce647f38f36869003bae782a8e" exitCode=0 Feb 14 18:55:10 crc kubenswrapper[4897]: I0214 18:55:10.889097 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" 
event={"ID":"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e","Type":"ContainerDied","Data":"3ebe1de4d9898959cdb10c52f63cf827c7c942ce647f38f36869003bae782a8e"} Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.382666 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mxpml"] Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.385162 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.390213 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxpml"] Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.506559 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-utilities\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.506618 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-catalog-content\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.506674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64sr8\" (UniqueName: \"kubernetes.io/projected/22fe96da-7df2-46e3-8203-71013397709a-kube-api-access-64sr8\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.607806 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-utilities\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.607858 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-catalog-content\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.607914 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64sr8\" (UniqueName: \"kubernetes.io/projected/22fe96da-7df2-46e3-8203-71013397709a-kube-api-access-64sr8\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.608426 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-utilities\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.608485 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-catalog-content\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.630010 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64sr8\" 
(UniqueName: \"kubernetes.io/projected/22fe96da-7df2-46e3-8203-71013397709a-kube-api-access-64sr8\") pod \"redhat-operators-mxpml\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.704901 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.896137 4897 generic.go:334] "Generic (PLEG): container finished" podID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerID="30ba9e66f0d94c65b5945b52999d8c715ac60d7a6a79c1104fa5c96b7228efb5" exitCode=0 Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.896195 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" event={"ID":"b1d83377-16af-4d9a-ad7d-3d0c2059b951","Type":"ContainerDied","Data":"30ba9e66f0d94c65b5945b52999d8c715ac60d7a6a79c1104fa5c96b7228efb5"} Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.898216 4897 generic.go:334] "Generic (PLEG): container finished" podID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerID="3f4e44eb924e0569fc2279a30373be621cfc9e89813ab7ab4bf3836b2b394031" exitCode=0 Feb 14 18:55:11 crc kubenswrapper[4897]: I0214 18:55:11.898271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" event={"ID":"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e","Type":"ContainerDied","Data":"3f4e44eb924e0569fc2279a30373be621cfc9e89813ab7ab4bf3836b2b394031"} Feb 14 18:55:12 crc kubenswrapper[4897]: I0214 18:55:12.589779 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxpml"] Feb 14 18:55:12 crc kubenswrapper[4897]: I0214 18:55:12.905637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxpml" 
event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerStarted","Data":"6dc67e147f8b766f7c033762f3b537539b911470815d5336318a44653df3da84"} Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.146749 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.215940 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.333089 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95f9\" (UniqueName: \"kubernetes.io/projected/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-kube-api-access-j95f9\") pod \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.333229 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-bundle\") pod \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\" (UID: \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.333269 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shwsp\" (UniqueName: \"kubernetes.io/projected/b1d83377-16af-4d9a-ad7d-3d0c2059b951-kube-api-access-shwsp\") pod \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.333345 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-util\") pod \"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\" (UID: 
\"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e\") " Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.333379 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-util\") pod \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.333417 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-bundle\") pod \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\" (UID: \"b1d83377-16af-4d9a-ad7d-3d0c2059b951\") " Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.334454 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-bundle" (OuterVolumeSpecName: "bundle") pod "b1d83377-16af-4d9a-ad7d-3d0c2059b951" (UID: "b1d83377-16af-4d9a-ad7d-3d0c2059b951"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.334485 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-bundle" (OuterVolumeSpecName: "bundle") pod "dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" (UID: "dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.339509 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-kube-api-access-j95f9" (OuterVolumeSpecName: "kube-api-access-j95f9") pod "dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" (UID: "dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e"). InnerVolumeSpecName "kube-api-access-j95f9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.342218 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d83377-16af-4d9a-ad7d-3d0c2059b951-kube-api-access-shwsp" (OuterVolumeSpecName: "kube-api-access-shwsp") pod "b1d83377-16af-4d9a-ad7d-3d0c2059b951" (UID: "b1d83377-16af-4d9a-ad7d-3d0c2059b951"). InnerVolumeSpecName "kube-api-access-shwsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.357433 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-util" (OuterVolumeSpecName: "util") pod "dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" (UID: "dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.360798 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-util" (OuterVolumeSpecName: "util") pod "b1d83377-16af-4d9a-ad7d-3d0c2059b951" (UID: "b1d83377-16af-4d9a-ad7d-3d0c2059b951"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.435241 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-util\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.435287 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-util\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.435296 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1d83377-16af-4d9a-ad7d-3d0c2059b951-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.435306 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95f9\" (UniqueName: \"kubernetes.io/projected/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-kube-api-access-j95f9\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.435317 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.435326 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shwsp\" (UniqueName: \"kubernetes.io/projected/b1d83377-16af-4d9a-ad7d-3d0c2059b951-kube-api-access-shwsp\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.914605 4897 generic.go:334] "Generic (PLEG): container finished" podID="22fe96da-7df2-46e3-8203-71013397709a" containerID="6dc7d5793e0340aed04813d4be5659d704c8a1ba617684e5a582429c1cc20b82" exitCode=0 Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.914667 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mxpml" event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerDied","Data":"6dc7d5793e0340aed04813d4be5659d704c8a1ba617684e5a582429c1cc20b82"} Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.917576 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" event={"ID":"dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e","Type":"ContainerDied","Data":"254bd7b0b83301a3f87701143cb2d4cd96cbacfb64d7e5de3f5758b025180c74"} Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.917755 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254bd7b0b83301a3f87701143cb2d4cd96cbacfb64d7e5de3f5758b025180c74" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.918199 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.924932 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" event={"ID":"b1d83377-16af-4d9a-ad7d-3d0c2059b951","Type":"ContainerDied","Data":"1f5e123dbd916c8ed2386a865305797dccd3d4647c3514c1cfe4e8727fb3863b"} Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.925157 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5e123dbd916c8ed2386a865305797dccd3d4647c3514c1cfe4e8727fb3863b" Feb 14 18:55:13 crc kubenswrapper[4897]: I0214 18:55:13.925193 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7" Feb 14 18:55:14 crc kubenswrapper[4897]: I0214 18:55:14.936094 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxpml" event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerStarted","Data":"b66e5db546a33ba8f2be1eb90952e2dfd74dfbb1a553921db518f67ba01e8df5"} Feb 14 18:55:15 crc kubenswrapper[4897]: I0214 18:55:15.943342 4897 generic.go:334] "Generic (PLEG): container finished" podID="22fe96da-7df2-46e3-8203-71013397709a" containerID="b66e5db546a33ba8f2be1eb90952e2dfd74dfbb1a553921db518f67ba01e8df5" exitCode=0 Feb 14 18:55:15 crc kubenswrapper[4897]: I0214 18:55:15.943401 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxpml" event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerDied","Data":"b66e5db546a33ba8f2be1eb90952e2dfd74dfbb1a553921db518f67ba01e8df5"} Feb 14 18:55:16 crc kubenswrapper[4897]: I0214 18:55:16.955435 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxpml" event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerStarted","Data":"755c84a388ce3d335ca28a13a7e2587662dc43113309e3610d5435570ab4a684"} Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.000992 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mxpml" podStartSLOduration=3.358738593 podStartE2EDuration="6.00096373s" podCreationTimestamp="2026-02-14 18:55:11 +0000 UTC" firstStartedPulling="2026-02-14 18:55:13.916632328 +0000 UTC m=+766.893040811" lastFinishedPulling="2026-02-14 18:55:16.558857465 +0000 UTC m=+769.535265948" observedRunningTime="2026-02-14 18:55:16.983717132 +0000 UTC m=+769.960125675" watchObservedRunningTime="2026-02-14 18:55:17.00096373 +0000 UTC m=+769.977372233" Feb 14 18:55:17 crc kubenswrapper[4897]: 
I0214 18:55:17.103690 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-x9vg8"] Feb 14 18:55:17 crc kubenswrapper[4897]: E0214 18:55:17.104013 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="util" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104049 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="util" Feb 14 18:55:17 crc kubenswrapper[4897]: E0214 18:55:17.104061 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="extract" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104068 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="extract" Feb 14 18:55:17 crc kubenswrapper[4897]: E0214 18:55:17.104093 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerName="pull" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104100 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerName="pull" Feb 14 18:55:17 crc kubenswrapper[4897]: E0214 18:55:17.104112 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="pull" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104119 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="pull" Feb 14 18:55:17 crc kubenswrapper[4897]: E0214 18:55:17.104131 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerName="util" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104138 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" 
containerName="util" Feb 14 18:55:17 crc kubenswrapper[4897]: E0214 18:55:17.104153 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerName="extract" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104160 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerName="extract" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104297 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d83377-16af-4d9a-ad7d-3d0c2059b951" containerName="extract" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104318 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e" containerName="extract" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.104828 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.108875 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-zs9tt" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.108872 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.109071 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.125526 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-x9vg8"] Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.195172 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjl6q\" (UniqueName: 
\"kubernetes.io/projected/2b81a3a6-44a0-4196-a84f-0eb00c65ce57-kube-api-access-bjl6q\") pod \"cluster-logging-operator-c769fd969-x9vg8\" (UID: \"2b81a3a6-44a0-4196-a84f-0eb00c65ce57\") " pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.309719 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjl6q\" (UniqueName: \"kubernetes.io/projected/2b81a3a6-44a0-4196-a84f-0eb00c65ce57-kube-api-access-bjl6q\") pod \"cluster-logging-operator-c769fd969-x9vg8\" (UID: \"2b81a3a6-44a0-4196-a84f-0eb00c65ce57\") " pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.336309 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjl6q\" (UniqueName: \"kubernetes.io/projected/2b81a3a6-44a0-4196-a84f-0eb00c65ce57-kube-api-access-bjl6q\") pod \"cluster-logging-operator-c769fd969-x9vg8\" (UID: \"2b81a3a6-44a0-4196-a84f-0eb00c65ce57\") " pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.419553 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.635264 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-x9vg8"] Feb 14 18:55:17 crc kubenswrapper[4897]: W0214 18:55:17.645946 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b81a3a6_44a0_4196_a84f_0eb00c65ce57.slice/crio-adc1e96ccef39b2475c9188b37086bd3fe13205d2ffc540a8b1a359713df9351 WatchSource:0}: Error finding container adc1e96ccef39b2475c9188b37086bd3fe13205d2ffc540a8b1a359713df9351: Status 404 returned error can't find the container with id adc1e96ccef39b2475c9188b37086bd3fe13205d2ffc540a8b1a359713df9351 Feb 14 18:55:17 crc kubenswrapper[4897]: I0214 18:55:17.963218 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" event={"ID":"2b81a3a6-44a0-4196-a84f-0eb00c65ce57","Type":"ContainerStarted","Data":"adc1e96ccef39b2475c9188b37086bd3fe13205d2ffc540a8b1a359713df9351"} Feb 14 18:55:21 crc kubenswrapper[4897]: I0214 18:55:21.711142 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:21 crc kubenswrapper[4897]: I0214 18:55:21.711444 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:22 crc kubenswrapper[4897]: I0214 18:55:22.771179 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mxpml" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="registry-server" probeResult="failure" output=< Feb 14 18:55:22 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 18:55:22 crc kubenswrapper[4897]: > Feb 14 18:55:25 crc kubenswrapper[4897]: I0214 
18:55:25.022223 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" event={"ID":"2b81a3a6-44a0-4196-a84f-0eb00c65ce57","Type":"ContainerStarted","Data":"c02a0be456f9dd6a2d75a522606324686e76157cccb48b1ed3b86b06c93fe90d"} Feb 14 18:55:25 crc kubenswrapper[4897]: I0214 18:55:25.044894 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-x9vg8" podStartSLOduration=1.239373096 podStartE2EDuration="8.044877929s" podCreationTimestamp="2026-02-14 18:55:17 +0000 UTC" firstStartedPulling="2026-02-14 18:55:17.649116931 +0000 UTC m=+770.625525414" lastFinishedPulling="2026-02-14 18:55:24.454621754 +0000 UTC m=+777.431030247" observedRunningTime="2026-02-14 18:55:25.043721603 +0000 UTC m=+778.020130106" watchObservedRunningTime="2026-02-14 18:55:25.044877929 +0000 UTC m=+778.021286422" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.078756 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn"] Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.080597 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.083611 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.105365 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.106559 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.106732 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.106782 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-5pxhg" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.111248 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.112805 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn"] Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.208197 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ab082f7b-c89d-4db4-a04f-e2db844fa022-manager-config\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 
18:55:29.208244 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-apiservice-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.208492 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2wxn\" (UniqueName: \"kubernetes.io/projected/ab082f7b-c89d-4db4-a04f-e2db844fa022-kube-api-access-g2wxn\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.208687 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-webhook-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.208757 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.309718 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/ab082f7b-c89d-4db4-a04f-e2db844fa022-manager-config\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.309798 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-apiservice-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.309949 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2wxn\" (UniqueName: \"kubernetes.io/projected/ab082f7b-c89d-4db4-a04f-e2db844fa022-kube-api-access-g2wxn\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.309989 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-webhook-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.310013 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: 
\"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.311213 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ab082f7b-c89d-4db4-a04f-e2db844fa022-manager-config\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.318895 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-apiservice-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.319629 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-webhook-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.326764 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2wxn\" (UniqueName: \"kubernetes.io/projected/ab082f7b-c89d-4db4-a04f-e2db844fa022-kube-api-access-g2wxn\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.328513 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab082f7b-c89d-4db4-a04f-e2db844fa022-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-78d86b9dcc-fgbpn\" (UID: \"ab082f7b-c89d-4db4-a04f-e2db844fa022\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.420385 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:29 crc kubenswrapper[4897]: I0214 18:55:29.902716 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn"] Feb 14 18:55:30 crc kubenswrapper[4897]: I0214 18:55:30.059468 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" event={"ID":"ab082f7b-c89d-4db4-a04f-e2db844fa022","Type":"ContainerStarted","Data":"37a0770b9a5723f13fc953e3acf44646fa36591b055015a56577dcd8adcb2e8b"} Feb 14 18:55:31 crc kubenswrapper[4897]: I0214 18:55:31.727136 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:55:31 crc kubenswrapper[4897]: I0214 18:55:31.727760 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:55:31 crc kubenswrapper[4897]: I0214 18:55:31.759670 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:31 crc kubenswrapper[4897]: I0214 18:55:31.828089 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:33 crc kubenswrapper[4897]: I0214 18:55:33.083625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" event={"ID":"ab082f7b-c89d-4db4-a04f-e2db844fa022","Type":"ContainerStarted","Data":"b3fa0a9a09a8e32ef910975539c50629a40330ab1ce2291b6d12dfe199fb4a00"} Feb 14 18:55:34 crc kubenswrapper[4897]: I0214 18:55:34.749051 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxpml"] Feb 14 18:55:34 crc kubenswrapper[4897]: I0214 18:55:34.749613 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mxpml" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="registry-server" containerID="cri-o://755c84a388ce3d335ca28a13a7e2587662dc43113309e3610d5435570ab4a684" gracePeriod=2 Feb 14 18:55:35 crc kubenswrapper[4897]: I0214 18:55:35.096778 4897 generic.go:334] "Generic (PLEG): container finished" podID="22fe96da-7df2-46e3-8203-71013397709a" containerID="755c84a388ce3d335ca28a13a7e2587662dc43113309e3610d5435570ab4a684" exitCode=0 Feb 14 18:55:35 crc kubenswrapper[4897]: I0214 18:55:35.096823 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxpml" event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerDied","Data":"755c84a388ce3d335ca28a13a7e2587662dc43113309e3610d5435570ab4a684"} Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.569799 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.666189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-utilities" (OuterVolumeSpecName: "utilities") pod "22fe96da-7df2-46e3-8203-71013397709a" (UID: "22fe96da-7df2-46e3-8203-71013397709a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.665569 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-utilities\") pod \"22fe96da-7df2-46e3-8203-71013397709a\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.666294 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64sr8\" (UniqueName: \"kubernetes.io/projected/22fe96da-7df2-46e3-8203-71013397709a-kube-api-access-64sr8\") pod \"22fe96da-7df2-46e3-8203-71013397709a\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.667122 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-catalog-content\") pod \"22fe96da-7df2-46e3-8203-71013397709a\" (UID: \"22fe96da-7df2-46e3-8203-71013397709a\") " Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.667590 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.670761 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/22fe96da-7df2-46e3-8203-71013397709a-kube-api-access-64sr8" (OuterVolumeSpecName: "kube-api-access-64sr8") pod "22fe96da-7df2-46e3-8203-71013397709a" (UID: "22fe96da-7df2-46e3-8203-71013397709a"). InnerVolumeSpecName "kube-api-access-64sr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.768944 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64sr8\" (UniqueName: \"kubernetes.io/projected/22fe96da-7df2-46e3-8203-71013397709a-kube-api-access-64sr8\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.797579 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22fe96da-7df2-46e3-8203-71013397709a" (UID: "22fe96da-7df2-46e3-8203-71013397709a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:55:38 crc kubenswrapper[4897]: I0214 18:55:38.870632 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe96da-7df2-46e3-8203-71013397709a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.126574 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxpml" event={"ID":"22fe96da-7df2-46e3-8203-71013397709a","Type":"ContainerDied","Data":"6dc67e147f8b766f7c033762f3b537539b911470815d5336318a44653df3da84"} Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.126619 4897 scope.go:117] "RemoveContainer" containerID="755c84a388ce3d335ca28a13a7e2587662dc43113309e3610d5435570ab4a684" Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.126725 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxpml" Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.153869 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxpml"] Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.159425 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mxpml"] Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.257089 4897 scope.go:117] "RemoveContainer" containerID="b66e5db546a33ba8f2be1eb90952e2dfd74dfbb1a553921db518f67ba01e8df5" Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.316301 4897 scope.go:117] "RemoveContainer" containerID="6dc7d5793e0340aed04813d4be5659d704c8a1ba617684e5a582429c1cc20b82" Feb 14 18:55:39 crc kubenswrapper[4897]: I0214 18:55:39.802123 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fe96da-7df2-46e3-8203-71013397709a" path="/var/lib/kubelet/pods/22fe96da-7df2-46e3-8203-71013397709a/volumes" Feb 14 18:55:40 crc kubenswrapper[4897]: I0214 18:55:40.136560 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" event={"ID":"ab082f7b-c89d-4db4-a04f-e2db844fa022","Type":"ContainerStarted","Data":"9286d706924f94272c200b8b4401b0575dd88183975d4cf0cb1bb079e30f3899"} Feb 14 18:55:40 crc kubenswrapper[4897]: I0214 18:55:40.136921 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:40 crc kubenswrapper[4897]: I0214 18:55:40.139366 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" Feb 14 18:55:40 crc kubenswrapper[4897]: I0214 18:55:40.170918 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" podStartSLOduration=1.707833039 podStartE2EDuration="11.170886425s" podCreationTimestamp="2026-02-14 18:55:29 +0000 UTC" firstStartedPulling="2026-02-14 18:55:29.909628628 +0000 UTC m=+782.886037101" lastFinishedPulling="2026-02-14 18:55:39.372682004 +0000 UTC m=+792.349090487" observedRunningTime="2026-02-14 18:55:40.160488015 +0000 UTC m=+793.136896578" watchObservedRunningTime="2026-02-14 18:55:40.170886425 +0000 UTC m=+793.147294938" Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.367823 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 14 18:55:44 crc kubenswrapper[4897]: E0214 18:55:44.368526 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="extract-utilities" Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.368540 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="extract-utilities" Feb 14 18:55:44 crc kubenswrapper[4897]: E0214 18:55:44.368561 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="extract-content" Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.368569 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="extract-content" Feb 14 18:55:44 crc kubenswrapper[4897]: E0214 18:55:44.368588 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="registry-server" Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.368596 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="registry-server" Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.368744 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="22fe96da-7df2-46e3-8203-71013397709a" containerName="registry-server"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.369232 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.373762 4897 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-rwss2"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.377096 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.377288 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.378682 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.459618 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c41a0194-059e-46cd-b957-32529ce80d33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c41a0194-059e-46cd-b957-32529ce80d33\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") " pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.459788 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcsdl\" (UniqueName: \"kubernetes.io/projected/d8bcdb2c-d922-450b-8d44-481eadfc3ec6-kube-api-access-mcsdl\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") " pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.561123 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcsdl\" (UniqueName: \"kubernetes.io/projected/d8bcdb2c-d922-450b-8d44-481eadfc3ec6-kube-api-access-mcsdl\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") " pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.561239 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c41a0194-059e-46cd-b957-32529ce80d33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c41a0194-059e-46cd-b957-32529ce80d33\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") " pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.564701 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.564737 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c41a0194-059e-46cd-b957-32529ce80d33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c41a0194-059e-46cd-b957-32529ce80d33\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/070edcfd5cc1433e1f20bcbad7a0acfdb59590521112cf692b2cabdef4cf50fd/globalmount\"" pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.585898 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcsdl\" (UniqueName: \"kubernetes.io/projected/d8bcdb2c-d922-450b-8d44-481eadfc3ec6-kube-api-access-mcsdl\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") " pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.595730 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c41a0194-059e-46cd-b957-32529ce80d33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c41a0194-059e-46cd-b957-32529ce80d33\") pod \"minio\" (UID: \"d8bcdb2c-d922-450b-8d44-481eadfc3ec6\") " pod="minio-dev/minio"
Feb 14 18:55:44 crc kubenswrapper[4897]: I0214 18:55:44.694098 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 14 18:55:45 crc kubenswrapper[4897]: I0214 18:55:45.190617 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 14 18:55:45 crc kubenswrapper[4897]: W0214 18:55:45.200325 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8bcdb2c_d922_450b_8d44_481eadfc3ec6.slice/crio-b07306d4fdd15d018b05811ad1e39916797d6d92a9efcd94e1e1f08198bcba71 WatchSource:0}: Error finding container b07306d4fdd15d018b05811ad1e39916797d6d92a9efcd94e1e1f08198bcba71: Status 404 returned error can't find the container with id b07306d4fdd15d018b05811ad1e39916797d6d92a9efcd94e1e1f08198bcba71
Feb 14 18:55:46 crc kubenswrapper[4897]: I0214 18:55:46.191040 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"d8bcdb2c-d922-450b-8d44-481eadfc3ec6","Type":"ContainerStarted","Data":"b07306d4fdd15d018b05811ad1e39916797d6d92a9efcd94e1e1f08198bcba71"}
Feb 14 18:55:49 crc kubenswrapper[4897]: I0214 18:55:49.210423 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"d8bcdb2c-d922-450b-8d44-481eadfc3ec6","Type":"ContainerStarted","Data":"d6a3dc53106614bb9384cb2ef99aa108135f01d4eb1c25aa16721c114ecefba4"}
Feb 14 18:55:49 crc kubenswrapper[4897]: I0214 18:55:49.235009 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.968811886 podStartE2EDuration="7.234989413s" podCreationTimestamp="2026-02-14 18:55:42 +0000 UTC" firstStartedPulling="2026-02-14 18:55:45.205399432 +0000 UTC m=+798.181807955" lastFinishedPulling="2026-02-14 18:55:48.471576959 +0000 UTC m=+801.447985482" observedRunningTime="2026-02-14 18:55:49.233078247 +0000 UTC m=+802.209486730" watchObservedRunningTime="2026-02-14 18:55:49.234989413 +0000 UTC m=+802.211397906"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.097432 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.100278 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.102812 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.102816 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.103013 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.103167 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-2lgmn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.104936 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.117471 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.144671 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.144789 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-config\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.144835 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.144866 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vffrk\" (UniqueName: \"kubernetes.io/projected/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-kube-api-access-vffrk\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.144893 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.245746 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.245842 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-config\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.245896 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.245930 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vffrk\" (UniqueName: \"kubernetes.io/projected/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-kube-api-access-vffrk\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.245948 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.246776 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.246812 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-config\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.252813 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.254175 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.265941 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.267010 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.269531 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.269753 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.269867 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.271322 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.278215 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vffrk\" (UniqueName: \"kubernetes.io/projected/0f4eb68c-7592-4025-a9a0-d5ed85aeec3c-kube-api-access-vffrk\") pod \"logging-loki-distributor-5d5548c9f5-lx9b2\" (UID: \"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.347440 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.347493 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.347512 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4dn\" (UniqueName: \"kubernetes.io/projected/74485545-1349-4cd2-9764-72af83ba9aa1-kube-api-access-kx4dn\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.347626 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74485545-1349-4cd2-9764-72af83ba9aa1-config\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.347751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.347781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.349904 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.350702 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.352694 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.352928 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.372558 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.428378 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.435129 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.437232 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.439378 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.439592 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.439617 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.439873 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.440150 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed2ea1c-038a-40eb-a753-68705d1ae150-config\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449451 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449480 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449511 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449526 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4dn\" (UniqueName: \"kubernetes.io/projected/74485545-1349-4cd2-9764-72af83ba9aa1-kube-api-access-kx4dn\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449546 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449573 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74485545-1349-4cd2-9764-72af83ba9aa1-config\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449615 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4fzb\" (UniqueName: \"kubernetes.io/projected/fed2ea1c-038a-40eb-a753-68705d1ae150-kube-api-access-p4fzb\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449635 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449657 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.449674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.450100 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-zxcwl"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.451159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74485545-1349-4cd2-9764-72af83ba9aa1-config\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.451853 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.455402 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.457334 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.458895 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.460184 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.461617 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.462135 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/74485545-1349-4cd2-9764-72af83ba9aa1-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.468494 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4dn\" (UniqueName: \"kubernetes.io/projected/74485545-1349-4cd2-9764-72af83ba9aa1-kube-api-access-kx4dn\") pod \"logging-loki-querier-76bf7b6d45-jw9nh\" (UID: \"74485545-1349-4cd2-9764-72af83ba9aa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.492317 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"]
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551708 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-lokistack-gateway\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551781 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551816 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-tls-secret\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551842 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551883 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-tenants\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551898 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-tenants\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-lokistack-gateway\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551946 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-tls-secret\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.551963 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552019 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4fzb\" (UniqueName: \"kubernetes.io/projected/fed2ea1c-038a-40eb-a753-68705d1ae150-kube-api-access-p4fzb\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552051 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552067 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-rbac\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552085 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl4g\" (UniqueName: \"kubernetes.io/projected/969ba5ce-9b29-41f2-ba75-76f548daa534-kube-api-access-fwl4g\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552107 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-rbac\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552128 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552173 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed2ea1c-038a-40eb-a753-68705d1ae150-config\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552212 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfnt\" (UniqueName: \"kubernetes.io/projected/cec4c0da-107d-4f6d-946d-2ffe925883e4-kube-api-access-kbfnt\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.552733 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.553467 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed2ea1c-038a-40eb-a753-68705d1ae150-config\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.556365 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.559326 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fed2ea1c-038a-40eb-a753-68705d1ae150-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.595460 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4fzb\" (UniqueName: \"kubernetes.io/projected/fed2ea1c-038a-40eb-a753-68705d1ae150-kube-api-access-p4fzb\") pod \"logging-loki-query-frontend-6d6859c548-zhtld\" (UID: \"fed2ea1c-038a-40eb-a753-68705d1ae150\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"
Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.628487 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655110 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655132 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbfnt\" (UniqueName: \"kubernetes.io/projected/cec4c0da-107d-4f6d-946d-2ffe925883e4-kube-api-access-kbfnt\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655162 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655184 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" 
(UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-lokistack-gateway\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655216 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-tls-secret\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655239 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655275 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-tenants\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655289 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-tenants\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655316 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-lokistack-gateway\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655331 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-tls-secret\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655345 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655369 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-rbac\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655386 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwl4g\" (UniqueName: \"kubernetes.io/projected/969ba5ce-9b29-41f2-ba75-76f548daa534-kube-api-access-fwl4g\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 
18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655406 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-rbac\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.655423 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.665672 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.668972 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-lokistack-gateway\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.669275 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: 
\"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.669591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.669909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.670517 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-lokistack-gateway\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.671097 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.675719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-tenants\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.676250 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cec4c0da-107d-4f6d-946d-2ffe925883e4-rbac\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.676445 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-rbac\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.676914 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-tenants\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.677971 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: 
\"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.685737 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/969ba5ce-9b29-41f2-ba75-76f548daa534-tls-secret\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.686240 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cec4c0da-107d-4f6d-946d-2ffe925883e4-tls-secret\") pod \"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.702362 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969ba5ce-9b29-41f2-ba75-76f548daa534-logging-loki-ca-bundle\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.706827 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwl4g\" (UniqueName: \"kubernetes.io/projected/969ba5ce-9b29-41f2-ba75-76f548daa534-kube-api-access-fwl4g\") pod \"logging-loki-gateway-c7757d78c-ctkkw\" (UID: \"969ba5ce-9b29-41f2-ba75-76f548daa534\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.725798 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbfnt\" (UniqueName: \"kubernetes.io/projected/cec4c0da-107d-4f6d-946d-2ffe925883e4-kube-api-access-kbfnt\") pod 
\"logging-loki-gateway-c7757d78c-fb7zn\" (UID: \"cec4c0da-107d-4f6d-946d-2ffe925883e4\") " pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.749744 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2"] Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.807690 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:55:54 crc kubenswrapper[4897]: I0214 18:55:54.825061 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.192903 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh"] Feb 14 18:55:55 crc kubenswrapper[4897]: W0214 18:55:55.200854 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74485545_1349_4cd2_9764_72af83ba9aa1.slice/crio-7bdd60529f33010ef633c188d86a4be0917e13d0fb6c605cf8f5e83bb6ec34d5 WatchSource:0}: Error finding container 7bdd60529f33010ef633c188d86a4be0917e13d0fb6c605cf8f5e83bb6ec34d5: Status 404 returned error can't find the container with id 7bdd60529f33010ef633c188d86a4be0917e13d0fb6c605cf8f5e83bb6ec34d5 Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.227522 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.228567 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.230449 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.230512 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.237098 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.257135 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" event={"ID":"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c","Type":"ContainerStarted","Data":"0dc1d0b5f5d9c1a37ce3da21cd049319109de6898438b5d0c25a1d6dc6d30613"} Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.260433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" event={"ID":"74485545-1349-4cd2-9764-72af83ba9aa1","Type":"ContainerStarted","Data":"7bdd60529f33010ef633c188d86a4be0917e13d0fb6c605cf8f5e83bb6ec34d5"} Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.277735 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhc4\" (UniqueName: \"kubernetes.io/projected/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-kube-api-access-6lhc4\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.277801 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.277840 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.277893 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.278050 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.278078 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.278100 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.278121 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-config\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.284237 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld"] Feb 14 18:55:55 crc kubenswrapper[4897]: W0214 18:55:55.290346 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfed2ea1c_038a_40eb_a753_68705d1ae150.slice/crio-77676f1d4e985fa7f042d12b45f6c364115d10090db39aaa23ab506d70b20f0f WatchSource:0}: Error finding container 77676f1d4e985fa7f042d12b45f6c364115d10090db39aaa23ab506d70b20f0f: Status 404 returned error can't find the container with id 77676f1d4e985fa7f042d12b45f6c364115d10090db39aaa23ab506d70b20f0f Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.309109 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.309963 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.311944 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.314265 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.315575 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.346282 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-c7757d78c-ctkkw"] Feb 14 18:55:55 crc kubenswrapper[4897]: W0214 18:55:55.353592 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod969ba5ce_9b29_41f2_ba75_76f548daa534.slice/crio-33813755b4f5630c9ca8efb94ac18a2c22fb314354ed45db6f144b908609d6c1 WatchSource:0}: Error finding container 33813755b4f5630c9ca8efb94ac18a2c22fb314354ed45db6f144b908609d6c1: Status 404 returned error can't find the container with id 33813755b4f5630c9ca8efb94ac18a2c22fb314354ed45db6f144b908609d6c1 Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.374408 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.375272 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.376834 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379244 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379777 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379826 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379862 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379881 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: 
\"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379909 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379926 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379948 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-config\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.379985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lhc4\" (UniqueName: \"kubernetes.io/projected/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-kube-api-access-6lhc4\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380006 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\") pod 
\"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380023 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e988817-cbfc-4faf-a31e-bf357c1c4691-config\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380057 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380077 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdl5v\" (UniqueName: \"kubernetes.io/projected/9e988817-cbfc-4faf-a31e-bf357c1c4691-kube-api-access-bdl5v\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380105 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380124 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.380148 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.381094 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.383740 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.383922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-config\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.385320 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.385421 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c5496651a699cde774521587c23ee050ca3276341fbcbd25acdca417863460c9/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.385756 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.385804 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/299407e64771c26f651f56676cc13fec76c18654eb827ca3f41686541dfc725b/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.385869 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.387764 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.392495 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-c7757d78c-fb7zn"] Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.398817 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.409066 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lhc4\" (UniqueName: \"kubernetes.io/projected/740f1f83-6c75-4e47-a5c5-6a0ef1d40cca-kube-api-access-6lhc4\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.430860 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c297d4e4-d9ae-406d-a058-74b1360d6895\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.437503 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dab271ed-5dc9-45b3-80c3-295e87da4bb8\") pod \"logging-loki-ingester-0\" (UID: \"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481315 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481371 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481394 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481423 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdl5v\" (UniqueName: 
\"kubernetes.io/projected/9e988817-cbfc-4faf-a31e-bf357c1c4691-kube-api-access-bdl5v\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481482 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz52p\" (UniqueName: \"kubernetes.io/projected/62b896b4-5861-4fa8-ac40-642f2d8688b5-kube-api-access-zz52p\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481533 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481562 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481585 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481611 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e988817-cbfc-4faf-a31e-bf357c1c4691-config\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481660 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b896b4-5861-4fa8-ac40-642f2d8688b5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481680 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.481719 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc 
kubenswrapper[4897]: I0214 18:55:55.482611 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.483583 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e988817-cbfc-4faf-a31e-bf357c1c4691-config\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.484231 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.484379 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.484411 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f71ab9a01d126553c1bb5b78f695f4727af85b9bca572eb53d17f4fa8ef1c42/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.485494 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.486812 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/9e988817-cbfc-4faf-a31e-bf357c1c4691-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.502618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdl5v\" (UniqueName: \"kubernetes.io/projected/9e988817-cbfc-4faf-a31e-bf357c1c4691-kube-api-access-bdl5v\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.506530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6a43a9b-8511-45b0-9f42-4ca61cb58289\") pod \"logging-loki-compactor-0\" (UID: \"9e988817-cbfc-4faf-a31e-bf357c1c4691\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.547147 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.582923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.582975 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.583013 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b896b4-5861-4fa8-ac40-642f2d8688b5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.583040 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.583086 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.583125 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.583165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz52p\" (UniqueName: \"kubernetes.io/projected/62b896b4-5861-4fa8-ac40-642f2d8688b5-kube-api-access-zz52p\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.583916 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.584948 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.584987 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c73fa08da2eb5a7c3f8b0ca62f8e83f8f977f5a97b2795951fd7b27885ed405/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.586479 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.587208 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b896b4-5861-4fa8-ac40-642f2d8688b5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.587524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.587994 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/62b896b4-5861-4fa8-ac40-642f2d8688b5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.604531 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz52p\" (UniqueName: \"kubernetes.io/projected/62b896b4-5861-4fa8-ac40-642f2d8688b5-kube-api-access-zz52p\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.612303 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67abaaae-e6a6-4ed4-84f2-8f0a257ce0d3\") pod \"logging-loki-index-gateway-0\" (UID: \"62b896b4-5861-4fa8-ac40-642f2d8688b5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.637608 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.738601 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.896412 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: W0214 18:55:55.912811 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e988817_cbfc_4faf_a31e_bf357c1c4691.slice/crio-770c8a486ec456082e6dd91c81bf203d68f23f8cadeca65120bbd10d2f6b57bf WatchSource:0}: Error finding container 770c8a486ec456082e6dd91c81bf203d68f23f8cadeca65120bbd10d2f6b57bf: Status 404 returned error can't find the container with id 770c8a486ec456082e6dd91c81bf203d68f23f8cadeca65120bbd10d2f6b57bf Feb 14 18:55:55 crc kubenswrapper[4897]: I0214 18:55:55.976652 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 18:55:55 crc kubenswrapper[4897]: W0214 18:55:55.978398 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b896b4_5861_4fa8_ac40_642f2d8688b5.slice/crio-58d21a802dbe6d8aa3058bc35f9afd11ec39b18cce4c854201dd00086272efeb WatchSource:0}: Error finding container 58d21a802dbe6d8aa3058bc35f9afd11ec39b18cce4c854201dd00086272efeb: Status 404 returned error can't find the container with id 58d21a802dbe6d8aa3058bc35f9afd11ec39b18cce4c854201dd00086272efeb Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.002154 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 18:55:56 crc kubenswrapper[4897]: W0214 18:55:56.012106 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod740f1f83_6c75_4e47_a5c5_6a0ef1d40cca.slice/crio-c3328cab322c29466fc9b93b19dcc083313d521168f40c6571d60eb530bac158 WatchSource:0}: Error 
finding container c3328cab322c29466fc9b93b19dcc083313d521168f40c6571d60eb530bac158: Status 404 returned error can't find the container with id c3328cab322c29466fc9b93b19dcc083313d521168f40c6571d60eb530bac158 Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.277626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" event={"ID":"969ba5ce-9b29-41f2-ba75-76f548daa534","Type":"ContainerStarted","Data":"33813755b4f5630c9ca8efb94ac18a2c22fb314354ed45db6f144b908609d6c1"} Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.279185 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca","Type":"ContainerStarted","Data":"c3328cab322c29466fc9b93b19dcc083313d521168f40c6571d60eb530bac158"} Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.280589 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" event={"ID":"cec4c0da-107d-4f6d-946d-2ffe925883e4","Type":"ContainerStarted","Data":"47525032d257242fbdc660dd95c535616b0dbffcb544c33b669d8d63827fe1ad"} Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.281847 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"9e988817-cbfc-4faf-a31e-bf357c1c4691","Type":"ContainerStarted","Data":"770c8a486ec456082e6dd91c81bf203d68f23f8cadeca65120bbd10d2f6b57bf"} Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.283180 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"62b896b4-5861-4fa8-ac40-642f2d8688b5","Type":"ContainerStarted","Data":"58d21a802dbe6d8aa3058bc35f9afd11ec39b18cce4c854201dd00086272efeb"} Feb 14 18:55:56 crc kubenswrapper[4897]: I0214 18:55:56.284500 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" event={"ID":"fed2ea1c-038a-40eb-a753-68705d1ae150","Type":"ContainerStarted","Data":"77676f1d4e985fa7f042d12b45f6c364115d10090db39aaa23ab506d70b20f0f"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.313821 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"740f1f83-6c75-4e47-a5c5-6a0ef1d40cca","Type":"ContainerStarted","Data":"fc4accf6fd6eea930492d757c6d39cd5a698cf5f52f5ca93395173b2204f7874"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.315431 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.317173 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" event={"ID":"cec4c0da-107d-4f6d-946d-2ffe925883e4","Type":"ContainerStarted","Data":"271e9a7de1fad5d838c3b7b1588c1949788ca4600cfad9f88b83aa01c6f71362"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.326115 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"9e988817-cbfc-4faf-a31e-bf357c1c4691","Type":"ContainerStarted","Data":"900e3ee5858f4c935b336e68f001de46e0247bdf366bd100887a5a389349495d"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.326260 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.328432 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"62b896b4-5861-4fa8-ac40-642f2d8688b5","Type":"ContainerStarted","Data":"3e0c5c44fbe7f140e81b90e1eac532fa092d87c152cccbaf3c7147f65eea09ae"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.329142 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.332190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" event={"ID":"74485545-1349-4cd2-9764-72af83ba9aa1","Type":"ContainerStarted","Data":"7fddf1676850aefc4fd84c3527680026260cac6f1a022079b511786dc01356a0"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.332554 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.333957 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" event={"ID":"fed2ea1c-038a-40eb-a753-68705d1ae150","Type":"ContainerStarted","Data":"0d9b38a5e2e9c96ee267a201456981ccf79585f6a7982ea3c1373f8eac943f78"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.334266 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.336720 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" event={"ID":"0f4eb68c-7592-4025-a9a0-d5ed85aeec3c","Type":"ContainerStarted","Data":"af01625d5270010eb8462955585a2c9491c5c919e3cdcc205492fbd946b02124"} Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.336861 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.346920 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" event={"ID":"969ba5ce-9b29-41f2-ba75-76f548daa534","Type":"ContainerStarted","Data":"a0e0b78745bdb04e7d35684180947b0f403d9c1b0554246bb4b48f7918554829"} Feb 
14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.352134 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.260576337 podStartE2EDuration="6.352109095s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:56.015793272 +0000 UTC m=+808.992201755" lastFinishedPulling="2026-02-14 18:55:59.10732599 +0000 UTC m=+812.083734513" observedRunningTime="2026-02-14 18:56:00.345285922 +0000 UTC m=+813.321694415" watchObservedRunningTime="2026-02-14 18:56:00.352109095 +0000 UTC m=+813.328517608" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.389966 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.309689408 podStartE2EDuration="6.389929481s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:55.982685216 +0000 UTC m=+808.959093699" lastFinishedPulling="2026-02-14 18:55:59.062925269 +0000 UTC m=+812.039333772" observedRunningTime="2026-02-14 18:56:00.372618376 +0000 UTC m=+813.349026889" watchObservedRunningTime="2026-02-14 18:56:00.389929481 +0000 UTC m=+813.366337964" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.393829 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" podStartSLOduration=2.432848327 podStartE2EDuration="6.393811226s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:55.203259424 +0000 UTC m=+808.179667907" lastFinishedPulling="2026-02-14 18:55:59.164222303 +0000 UTC m=+812.140630806" observedRunningTime="2026-02-14 18:56:00.387361215 +0000 UTC m=+813.363769698" watchObservedRunningTime="2026-02-14 18:56:00.393811226 +0000 UTC m=+813.370219709" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.411829 4897 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.163796596 podStartE2EDuration="6.411803422s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:55.915652121 +0000 UTC m=+808.892060604" lastFinishedPulling="2026-02-14 18:55:59.163658937 +0000 UTC m=+812.140067430" observedRunningTime="2026-02-14 18:56:00.409637828 +0000 UTC m=+813.386046331" watchObservedRunningTime="2026-02-14 18:56:00.411803422 +0000 UTC m=+813.388211935" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.433473 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" podStartSLOduration=2.610321759 podStartE2EDuration="6.433451616s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:55.292620564 +0000 UTC m=+808.269029047" lastFinishedPulling="2026-02-14 18:55:59.115750401 +0000 UTC m=+812.092158904" observedRunningTime="2026-02-14 18:56:00.426826609 +0000 UTC m=+813.403235122" watchObservedRunningTime="2026-02-14 18:56:00.433451616 +0000 UTC m=+813.409860109" Feb 14 18:56:00 crc kubenswrapper[4897]: I0214 18:56:00.440887 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" podStartSLOduration=2.103020588 podStartE2EDuration="6.440870257s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:54.785958882 +0000 UTC m=+807.762367375" lastFinishedPulling="2026-02-14 18:55:59.123808511 +0000 UTC m=+812.100217044" observedRunningTime="2026-02-14 18:56:00.439023812 +0000 UTC m=+813.415432305" watchObservedRunningTime="2026-02-14 18:56:00.440870257 +0000 UTC m=+813.417278740" Feb 14 18:56:01 crc kubenswrapper[4897]: I0214 18:56:01.726272 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:56:01 crc kubenswrapper[4897]: I0214 18:56:01.726777 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:56:01 crc kubenswrapper[4897]: I0214 18:56:01.726864 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 18:56:01 crc kubenswrapper[4897]: I0214 18:56:01.728263 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"446e5cdc189ae2c51f665c763c60fe16201efbf3c0c2e1e9f8fe851134e12224"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 18:56:01 crc kubenswrapper[4897]: I0214 18:56:01.728415 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://446e5cdc189ae2c51f665c763c60fe16201efbf3c0c2e1e9f8fe851134e12224" gracePeriod=600 Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.365964 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" event={"ID":"cec4c0da-107d-4f6d-946d-2ffe925883e4","Type":"ContainerStarted","Data":"d659b6e608a5cde371843eb161239f969a16e654116686fd5e92bf65ffae7760"} Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 
18:56:02.366497 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.369904 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="446e5cdc189ae2c51f665c763c60fe16201efbf3c0c2e1e9f8fe851134e12224" exitCode=0 Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.370060 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"446e5cdc189ae2c51f665c763c60fe16201efbf3c0c2e1e9f8fe851134e12224"} Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.370108 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"f530591baa3a6bc6b0de2a6354906a1508c867fd239d41af91ab4794b66dc167"} Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.370163 4897 scope.go:117] "RemoveContainer" containerID="2c722bb3847b6caa173e38da195a6a74bd7b3547a2d4d41a8a85c1c5e17187d8" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.376874 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" event={"ID":"969ba5ce-9b29-41f2-ba75-76f548daa534","Type":"ContainerStarted","Data":"7766acabf2c8171cb21893f6e4bf66cf7e2f81803b4bd53fb58fe05db74b48da"} Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.376923 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.377232 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:56:02 crc 
kubenswrapper[4897]: I0214 18:56:02.383825 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.390811 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.394019 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.396240 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podStartSLOduration=2.351204796 podStartE2EDuration="8.396219704s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:55.402840805 +0000 UTC m=+808.379249288" lastFinishedPulling="2026-02-14 18:56:01.447855713 +0000 UTC m=+814.424264196" observedRunningTime="2026-02-14 18:56:02.392798892 +0000 UTC m=+815.369207465" watchObservedRunningTime="2026-02-14 18:56:02.396219704 +0000 UTC m=+815.372628227" Feb 14 18:56:02 crc kubenswrapper[4897]: I0214 18:56:02.441548 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podStartSLOduration=2.344122255 podStartE2EDuration="8.441520002s" podCreationTimestamp="2026-02-14 18:55:54 +0000 UTC" firstStartedPulling="2026-02-14 18:55:55.355737143 +0000 UTC m=+808.332145626" lastFinishedPulling="2026-02-14 18:56:01.45313489 +0000 UTC m=+814.429543373" observedRunningTime="2026-02-14 18:56:02.439207994 +0000 UTC m=+815.415616487" watchObservedRunningTime="2026-02-14 18:56:02.441520002 +0000 UTC m=+815.417928525" Feb 14 18:56:03 crc kubenswrapper[4897]: I0214 18:56:03.388093 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:56:03 crc kubenswrapper[4897]: I0214 18:56:03.406575 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" Feb 14 18:56:14 crc kubenswrapper[4897]: I0214 18:56:14.441507 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" Feb 14 18:56:14 crc kubenswrapper[4897]: I0214 18:56:14.669151 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" Feb 14 18:56:14 crc kubenswrapper[4897]: I0214 18:56:14.682514 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" Feb 14 18:56:15 crc kubenswrapper[4897]: I0214 18:56:15.563225 4897 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 14 18:56:15 crc kubenswrapper[4897]: I0214 18:56:15.563310 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="740f1f83-6c75-4e47-a5c5-6a0ef1d40cca" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 18:56:15 crc kubenswrapper[4897]: I0214 18:56:15.647688 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 14 18:56:15 crc kubenswrapper[4897]: I0214 18:56:15.743219 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 18:56:25 crc kubenswrapper[4897]: I0214 18:56:25.563557 4897 patch_prober.go:28] interesting 
pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 14 18:56:25 crc kubenswrapper[4897]: I0214 18:56:25.564333 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="740f1f83-6c75-4e47-a5c5-6a0ef1d40cca" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 18:56:35 crc kubenswrapper[4897]: I0214 18:56:35.563132 4897 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 18:56:35 crc kubenswrapper[4897]: I0214 18:56:35.564068 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="740f1f83-6c75-4e47-a5c5-6a0ef1d40cca" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.470915 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nwj6g"] Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.475236 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.501381 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwj6g"] Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.598890 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-catalog-content\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.598949 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvx4c\" (UniqueName: \"kubernetes.io/projected/f79077c6-e36c-4077-bd26-3c35c505b820-kube-api-access-lvx4c\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.599463 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-utilities\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.700974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-utilities\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.701109 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-catalog-content\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.701130 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvx4c\" (UniqueName: \"kubernetes.io/projected/f79077c6-e36c-4077-bd26-3c35c505b820-kube-api-access-lvx4c\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.701640 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-utilities\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.701713 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-catalog-content\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.725771 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvx4c\" (UniqueName: \"kubernetes.io/projected/f79077c6-e36c-4077-bd26-3c35c505b820-kube-api-access-lvx4c\") pod \"community-operators-nwj6g\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:38 crc kubenswrapper[4897]: I0214 18:56:38.816023 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:39 crc kubenswrapper[4897]: I0214 18:56:39.264863 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwj6g"] Feb 14 18:56:39 crc kubenswrapper[4897]: I0214 18:56:39.729591 4897 generic.go:334] "Generic (PLEG): container finished" podID="f79077c6-e36c-4077-bd26-3c35c505b820" containerID="14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155" exitCode=0 Feb 14 18:56:39 crc kubenswrapper[4897]: I0214 18:56:39.729692 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerDied","Data":"14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155"} Feb 14 18:56:39 crc kubenswrapper[4897]: I0214 18:56:39.730005 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerStarted","Data":"85d089721651ed6c4176f3fc24bdda7c992b4099da625ff606888785102163f4"} Feb 14 18:56:40 crc kubenswrapper[4897]: I0214 18:56:40.746524 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerStarted","Data":"75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18"} Feb 14 18:56:41 crc kubenswrapper[4897]: I0214 18:56:41.760939 4897 generic.go:334] "Generic (PLEG): container finished" podID="f79077c6-e36c-4077-bd26-3c35c505b820" containerID="75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18" exitCode=0 Feb 14 18:56:41 crc kubenswrapper[4897]: I0214 18:56:41.761078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" 
event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerDied","Data":"75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18"} Feb 14 18:56:42 crc kubenswrapper[4897]: I0214 18:56:42.775789 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerStarted","Data":"e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f"} Feb 14 18:56:42 crc kubenswrapper[4897]: I0214 18:56:42.805441 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nwj6g" podStartSLOduration=2.375787741 podStartE2EDuration="4.805420207s" podCreationTimestamp="2026-02-14 18:56:38 +0000 UTC" firstStartedPulling="2026-02-14 18:56:39.732996187 +0000 UTC m=+852.709404700" lastFinishedPulling="2026-02-14 18:56:42.162628653 +0000 UTC m=+855.139037166" observedRunningTime="2026-02-14 18:56:42.801250162 +0000 UTC m=+855.777658695" watchObservedRunningTime="2026-02-14 18:56:42.805420207 +0000 UTC m=+855.781828710" Feb 14 18:56:45 crc kubenswrapper[4897]: I0214 18:56:45.564616 4897 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 18:56:45 crc kubenswrapper[4897]: I0214 18:56:45.565123 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="740f1f83-6c75-4e47-a5c5-6a0ef1d40cca" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.446526 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pj59l"] Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.449904 4897 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.452640 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj59l"] Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.481234 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ldb\" (UniqueName: \"kubernetes.io/projected/134c3238-8970-47dc-8b91-34e8f8f2579c-kube-api-access-s2ldb\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.481266 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-utilities\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.481292 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-catalog-content\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.582711 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ldb\" (UniqueName: \"kubernetes.io/projected/134c3238-8970-47dc-8b91-34e8f8f2579c-kube-api-access-s2ldb\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.582754 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-utilities\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.582781 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-catalog-content\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.583432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-utilities\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.583741 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-catalog-content\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.613987 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ldb\" (UniqueName: \"kubernetes.io/projected/134c3238-8970-47dc-8b91-34e8f8f2579c-kube-api-access-s2ldb\") pod \"redhat-marketplace-pj59l\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.789782 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.816706 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.816758 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.866233 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:48 crc kubenswrapper[4897]: I0214 18:56:48.928285 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:49 crc kubenswrapper[4897]: I0214 18:56:49.287569 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj59l"] Feb 14 18:56:49 crc kubenswrapper[4897]: I0214 18:56:49.869711 4897 generic.go:334] "Generic (PLEG): container finished" podID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerID="708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd" exitCode=0 Feb 14 18:56:49 crc kubenswrapper[4897]: I0214 18:56:49.869791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj59l" event={"ID":"134c3238-8970-47dc-8b91-34e8f8f2579c","Type":"ContainerDied","Data":"708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd"} Feb 14 18:56:49 crc kubenswrapper[4897]: I0214 18:56:49.870498 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj59l" event={"ID":"134c3238-8970-47dc-8b91-34e8f8f2579c","Type":"ContainerStarted","Data":"c794dcff67171c66929fad288723059a3d60c972faccb50b983126f0dbaf4f3f"} Feb 14 18:56:50 crc kubenswrapper[4897]: I0214 18:56:50.882588 4897 generic.go:334] 
"Generic (PLEG): container finished" podID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerID="4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233" exitCode=0 Feb 14 18:56:50 crc kubenswrapper[4897]: I0214 18:56:50.882680 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj59l" event={"ID":"134c3238-8970-47dc-8b91-34e8f8f2579c","Type":"ContainerDied","Data":"4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233"} Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.215796 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwj6g"] Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.216300 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nwj6g" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="registry-server" containerID="cri-o://e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f" gracePeriod=2 Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.672457 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.755103 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-utilities\") pod \"f79077c6-e36c-4077-bd26-3c35c505b820\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.755190 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-catalog-content\") pod \"f79077c6-e36c-4077-bd26-3c35c505b820\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.755311 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvx4c\" (UniqueName: \"kubernetes.io/projected/f79077c6-e36c-4077-bd26-3c35c505b820-kube-api-access-lvx4c\") pod \"f79077c6-e36c-4077-bd26-3c35c505b820\" (UID: \"f79077c6-e36c-4077-bd26-3c35c505b820\") " Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.755964 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-utilities" (OuterVolumeSpecName: "utilities") pod "f79077c6-e36c-4077-bd26-3c35c505b820" (UID: "f79077c6-e36c-4077-bd26-3c35c505b820"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.762254 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79077c6-e36c-4077-bd26-3c35c505b820-kube-api-access-lvx4c" (OuterVolumeSpecName: "kube-api-access-lvx4c") pod "f79077c6-e36c-4077-bd26-3c35c505b820" (UID: "f79077c6-e36c-4077-bd26-3c35c505b820"). InnerVolumeSpecName "kube-api-access-lvx4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.856833 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.856876 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvx4c\" (UniqueName: \"kubernetes.io/projected/f79077c6-e36c-4077-bd26-3c35c505b820-kube-api-access-lvx4c\") on node \"crc\" DevicePath \"\"" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.891541 4897 generic.go:334] "Generic (PLEG): container finished" podID="f79077c6-e36c-4077-bd26-3c35c505b820" containerID="e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f" exitCode=0 Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.891602 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerDied","Data":"e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f"} Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.891652 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwj6g" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.891670 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwj6g" event={"ID":"f79077c6-e36c-4077-bd26-3c35c505b820","Type":"ContainerDied","Data":"85d089721651ed6c4176f3fc24bdda7c992b4099da625ff606888785102163f4"} Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.891701 4897 scope.go:117] "RemoveContainer" containerID="e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.894206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj59l" event={"ID":"134c3238-8970-47dc-8b91-34e8f8f2579c","Type":"ContainerStarted","Data":"6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f"} Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.918257 4897 scope.go:117] "RemoveContainer" containerID="75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.921625 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pj59l" podStartSLOduration=2.533072044 podStartE2EDuration="3.921606978s" podCreationTimestamp="2026-02-14 18:56:48 +0000 UTC" firstStartedPulling="2026-02-14 18:56:49.872209081 +0000 UTC m=+862.848617594" lastFinishedPulling="2026-02-14 18:56:51.260744035 +0000 UTC m=+864.237152528" observedRunningTime="2026-02-14 18:56:51.918585048 +0000 UTC m=+864.894993581" watchObservedRunningTime="2026-02-14 18:56:51.921606978 +0000 UTC m=+864.898015471" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.947067 4897 scope.go:117] "RemoveContainer" containerID="14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.976307 4897 scope.go:117] "RemoveContainer" 
containerID="e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f" Feb 14 18:56:51 crc kubenswrapper[4897]: E0214 18:56:51.976935 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f\": container with ID starting with e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f not found: ID does not exist" containerID="e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.976967 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f"} err="failed to get container status \"e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f\": rpc error: code = NotFound desc = could not find container \"e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f\": container with ID starting with e8ed4543d9c468ae45bdffc09f1bd5862afc2d89d1339b4f481288ae0a90413f not found: ID does not exist" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.976988 4897 scope.go:117] "RemoveContainer" containerID="75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18" Feb 14 18:56:51 crc kubenswrapper[4897]: E0214 18:56:51.977406 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18\": container with ID starting with 75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18 not found: ID does not exist" containerID="75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.977431 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18"} err="failed to get container status \"75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18\": rpc error: code = NotFound desc = could not find container \"75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18\": container with ID starting with 75910b421fb230b6d42c633f004f6a698fbea278a44e26504bf710eea72f3d18 not found: ID does not exist" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.977445 4897 scope.go:117] "RemoveContainer" containerID="14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155" Feb 14 18:56:51 crc kubenswrapper[4897]: E0214 18:56:51.977736 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155\": container with ID starting with 14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155 not found: ID does not exist" containerID="14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155" Feb 14 18:56:51 crc kubenswrapper[4897]: I0214 18:56:51.977758 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155"} err="failed to get container status \"14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155\": rpc error: code = NotFound desc = could not find container \"14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155\": container with ID starting with 14ee6460bcf3fcce031f5e5e179b067978d1fdc0f56282ea0b4656f05480e155 not found: ID does not exist" Feb 14 18:56:52 crc kubenswrapper[4897]: I0214 18:56:52.631264 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"f79077c6-e36c-4077-bd26-3c35c505b820" (UID: "f79077c6-e36c-4077-bd26-3c35c505b820"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:56:52 crc kubenswrapper[4897]: I0214 18:56:52.670369 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79077c6-e36c-4077-bd26-3c35c505b820-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:56:52 crc kubenswrapper[4897]: I0214 18:56:52.838120 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwj6g"] Feb 14 18:56:52 crc kubenswrapper[4897]: I0214 18:56:52.846145 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nwj6g"] Feb 14 18:56:53 crc kubenswrapper[4897]: I0214 18:56:53.809644 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" path="/var/lib/kubelet/pods/f79077c6-e36c-4077-bd26-3c35c505b820/volumes" Feb 14 18:56:55 crc kubenswrapper[4897]: I0214 18:56:55.562294 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.324928 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pcgwz"] Feb 14 18:56:57 crc kubenswrapper[4897]: E0214 18:56:57.325526 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="registry-server" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.325562 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="registry-server" Feb 14 18:56:57 crc kubenswrapper[4897]: E0214 18:56:57.325621 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" 
containerName="extract-utilities" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.325641 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="extract-utilities" Feb 14 18:56:57 crc kubenswrapper[4897]: E0214 18:56:57.325677 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="extract-content" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.325695 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="extract-content" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.326080 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79077c6-e36c-4077-bd26-3c35c505b820" containerName="registry-server" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.329242 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.346861 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcgwz"] Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.363646 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzph5\" (UniqueName: \"kubernetes.io/projected/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-kube-api-access-lzph5\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.363904 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-catalog-content\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " 
pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.363995 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-utilities\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.465365 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzph5\" (UniqueName: \"kubernetes.io/projected/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-kube-api-access-lzph5\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.465446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-catalog-content\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.465479 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-utilities\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.466005 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-utilities\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " 
pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.466166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-catalog-content\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.501763 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzph5\" (UniqueName: \"kubernetes.io/projected/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-kube-api-access-lzph5\") pod \"certified-operators-pcgwz\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.692788 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:56:57 crc kubenswrapper[4897]: I0214 18:56:57.980472 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcgwz"] Feb 14 18:56:58 crc kubenswrapper[4897]: I0214 18:56:58.790212 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:58 crc kubenswrapper[4897]: I0214 18:56:58.790649 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:58 crc kubenswrapper[4897]: I0214 18:56:58.854214 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:56:58 crc kubenswrapper[4897]: I0214 18:56:58.959212 4897 generic.go:334] "Generic (PLEG): container finished" podID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" 
containerID="faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f" exitCode=0 Feb 14 18:56:58 crc kubenswrapper[4897]: I0214 18:56:58.959288 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcgwz" event={"ID":"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba","Type":"ContainerDied","Data":"faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f"} Feb 14 18:56:58 crc kubenswrapper[4897]: I0214 18:56:58.959468 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcgwz" event={"ID":"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba","Type":"ContainerStarted","Data":"0bfadfd0bba9a3b9207b664cef1b5b7d5b2b0978c83657b8fe9fa16c3e75f8f5"} Feb 14 18:56:59 crc kubenswrapper[4897]: I0214 18:56:59.019156 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:57:00 crc kubenswrapper[4897]: I0214 18:57:00.975737 4897 generic.go:334] "Generic (PLEG): container finished" podID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerID="e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20" exitCode=0 Feb 14 18:57:00 crc kubenswrapper[4897]: I0214 18:57:00.975840 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcgwz" event={"ID":"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba","Type":"ContainerDied","Data":"e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20"} Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.277827 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj59l"] Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.278100 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pj59l" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="registry-server" 
containerID="cri-o://6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f" gracePeriod=2 Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.720559 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.843341 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-utilities\") pod \"134c3238-8970-47dc-8b91-34e8f8f2579c\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.843471 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-catalog-content\") pod \"134c3238-8970-47dc-8b91-34e8f8f2579c\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.843494 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2ldb\" (UniqueName: \"kubernetes.io/projected/134c3238-8970-47dc-8b91-34e8f8f2579c-kube-api-access-s2ldb\") pod \"134c3238-8970-47dc-8b91-34e8f8f2579c\" (UID: \"134c3238-8970-47dc-8b91-34e8f8f2579c\") " Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.844466 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-utilities" (OuterVolumeSpecName: "utilities") pod "134c3238-8970-47dc-8b91-34e8f8f2579c" (UID: "134c3238-8970-47dc-8b91-34e8f8f2579c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.848520 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134c3238-8970-47dc-8b91-34e8f8f2579c-kube-api-access-s2ldb" (OuterVolumeSpecName: "kube-api-access-s2ldb") pod "134c3238-8970-47dc-8b91-34e8f8f2579c" (UID: "134c3238-8970-47dc-8b91-34e8f8f2579c"). InnerVolumeSpecName "kube-api-access-s2ldb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.879953 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "134c3238-8970-47dc-8b91-34e8f8f2579c" (UID: "134c3238-8970-47dc-8b91-34e8f8f2579c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.947410 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.947447 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2ldb\" (UniqueName: \"kubernetes.io/projected/134c3238-8970-47dc-8b91-34e8f8f2579c-kube-api-access-s2ldb\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.947461 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134c3238-8970-47dc-8b91-34e8f8f2579c-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.987049 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcgwz" 
event={"ID":"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba","Type":"ContainerStarted","Data":"9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6"} Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.990067 4897 generic.go:334] "Generic (PLEG): container finished" podID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerID="6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f" exitCode=0 Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.990128 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj59l" event={"ID":"134c3238-8970-47dc-8b91-34e8f8f2579c","Type":"ContainerDied","Data":"6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f"} Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.990136 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj59l" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.990166 4897 scope.go:117] "RemoveContainer" containerID="6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f" Feb 14 18:57:01 crc kubenswrapper[4897]: I0214 18:57:01.990154 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj59l" event={"ID":"134c3238-8970-47dc-8b91-34e8f8f2579c","Type":"ContainerDied","Data":"c794dcff67171c66929fad288723059a3d60c972faccb50b983126f0dbaf4f3f"} Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.013638 4897 scope.go:117] "RemoveContainer" containerID="4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.029472 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pcgwz" podStartSLOduration=2.514893164 podStartE2EDuration="5.029456777s" podCreationTimestamp="2026-02-14 18:56:57 +0000 UTC" firstStartedPulling="2026-02-14 18:56:58.961629524 +0000 UTC m=+871.938038007" 
lastFinishedPulling="2026-02-14 18:57:01.476193117 +0000 UTC m=+874.452601620" observedRunningTime="2026-02-14 18:57:02.018330685 +0000 UTC m=+874.994739178" watchObservedRunningTime="2026-02-14 18:57:02.029456777 +0000 UTC m=+875.005865260" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.033560 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj59l"] Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.048022 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj59l"] Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.051690 4897 scope.go:117] "RemoveContainer" containerID="708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.081626 4897 scope.go:117] "RemoveContainer" containerID="6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f" Feb 14 18:57:02 crc kubenswrapper[4897]: E0214 18:57:02.081991 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f\": container with ID starting with 6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f not found: ID does not exist" containerID="6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.082020 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f"} err="failed to get container status \"6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f\": rpc error: code = NotFound desc = could not find container \"6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f\": container with ID starting with 6269b1ecf58ab298b70a7c783d8d5902144062ed74d66379a630c500c1a2788f not found: ID does not 
exist" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.082052 4897 scope.go:117] "RemoveContainer" containerID="4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233" Feb 14 18:57:02 crc kubenswrapper[4897]: E0214 18:57:02.082799 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233\": container with ID starting with 4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233 not found: ID does not exist" containerID="4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.082817 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233"} err="failed to get container status \"4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233\": rpc error: code = NotFound desc = could not find container \"4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233\": container with ID starting with 4823ee9e4d593ebacc06679747e4a98a95bf43a9946ceb847efff5ed7575a233 not found: ID does not exist" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.082828 4897 scope.go:117] "RemoveContainer" containerID="708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd" Feb 14 18:57:02 crc kubenswrapper[4897]: E0214 18:57:02.083097 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd\": container with ID starting with 708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd not found: ID does not exist" containerID="708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd" Feb 14 18:57:02 crc kubenswrapper[4897]: I0214 18:57:02.083112 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd"} err="failed to get container status \"708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd\": rpc error: code = NotFound desc = could not find container \"708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd\": container with ID starting with 708602e0732c68707bb90b35a2a29ca88165549a035a4e5fdc437a0b3ff057bd not found: ID does not exist" Feb 14 18:57:03 crc kubenswrapper[4897]: I0214 18:57:03.806729 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" path="/var/lib/kubelet/pods/134c3238-8970-47dc-8b91-34e8f8f2579c/volumes" Feb 14 18:57:07 crc kubenswrapper[4897]: I0214 18:57:07.694138 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:57:07 crc kubenswrapper[4897]: I0214 18:57:07.694587 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:57:07 crc kubenswrapper[4897]: I0214 18:57:07.787320 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:57:08 crc kubenswrapper[4897]: I0214 18:57:08.161677 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:57:08 crc kubenswrapper[4897]: I0214 18:57:08.235677 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcgwz"] Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.066823 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pcgwz" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="registry-server" 
containerID="cri-o://9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6" gracePeriod=2 Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.552118 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.687153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-catalog-content\") pod \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.687356 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzph5\" (UniqueName: \"kubernetes.io/projected/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-kube-api-access-lzph5\") pod \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.687435 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-utilities\") pod \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\" (UID: \"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba\") " Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.688817 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-utilities" (OuterVolumeSpecName: "utilities") pod "6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" (UID: "6aec9c60-b00a-4d4e-8f6a-74d6aac98aba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.697736 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-kube-api-access-lzph5" (OuterVolumeSpecName: "kube-api-access-lzph5") pod "6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" (UID: "6aec9c60-b00a-4d4e-8f6a-74d6aac98aba"). InnerVolumeSpecName "kube-api-access-lzph5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.789914 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzph5\" (UniqueName: \"kubernetes.io/projected/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-kube-api-access-lzph5\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:10 crc kubenswrapper[4897]: I0214 18:57:10.789961 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.078011 4897 generic.go:334] "Generic (PLEG): container finished" podID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerID="9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6" exitCode=0 Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.078090 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcgwz" event={"ID":"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba","Type":"ContainerDied","Data":"9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6"} Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.078133 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcgwz" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.078154 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcgwz" event={"ID":"6aec9c60-b00a-4d4e-8f6a-74d6aac98aba","Type":"ContainerDied","Data":"0bfadfd0bba9a3b9207b664cef1b5b7d5b2b0978c83657b8fe9fa16c3e75f8f5"} Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.078191 4897 scope.go:117] "RemoveContainer" containerID="9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.082742 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" (UID: "6aec9c60-b00a-4d4e-8f6a-74d6aac98aba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.095712 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.096672 4897 scope.go:117] "RemoveContainer" containerID="e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.117699 4897 scope.go:117] "RemoveContainer" containerID="faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.139360 4897 scope.go:117] "RemoveContainer" containerID="9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6" Feb 14 18:57:11 crc kubenswrapper[4897]: E0214 18:57:11.139933 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6\": container with ID starting with 9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6 not found: ID does not exist" containerID="9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.139965 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6"} err="failed to get container status \"9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6\": rpc error: code = NotFound desc = could not find container \"9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6\": container with ID starting with 9eb5116496f6001b39ca08184ad3e5590eecbc8951ff1093215b31e9b9aa10c6 not found: ID does not exist" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.139988 4897 scope.go:117] "RemoveContainer" containerID="e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20" Feb 14 18:57:11 crc kubenswrapper[4897]: E0214 18:57:11.140604 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20\": container with ID starting with e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20 not found: ID does not exist" containerID="e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.140654 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20"} err="failed to get container status \"e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20\": rpc error: code = NotFound desc = could not find container 
\"e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20\": container with ID starting with e77df64ccd71db28ed7ec295cf0f9ae958019894962a72872dd2fcb801649e20 not found: ID does not exist" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.140673 4897 scope.go:117] "RemoveContainer" containerID="faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f" Feb 14 18:57:11 crc kubenswrapper[4897]: E0214 18:57:11.141170 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f\": container with ID starting with faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f not found: ID does not exist" containerID="faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.141212 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f"} err="failed to get container status \"faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f\": rpc error: code = NotFound desc = could not find container \"faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f\": container with ID starting with faf0b04f13b91752b5902cacf65677d7fe02c0ebc194933545ee66949d4b781f not found: ID does not exist" Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.430025 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcgwz"] Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.440195 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pcgwz"] Feb 14 18:57:11 crc kubenswrapper[4897]: I0214 18:57:11.812521 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" 
path="/var/lib/kubelet/pods/6aec9c60-b00a-4d4e-8f6a-74d6aac98aba/volumes" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.864686 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-b2kwp"] Feb 14 18:57:13 crc kubenswrapper[4897]: E0214 18:57:13.865143 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="registry-server" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865155 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="registry-server" Feb 14 18:57:13 crc kubenswrapper[4897]: E0214 18:57:13.865175 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="registry-server" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865181 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="registry-server" Feb 14 18:57:13 crc kubenswrapper[4897]: E0214 18:57:13.865190 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="extract-content" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865197 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="extract-content" Feb 14 18:57:13 crc kubenswrapper[4897]: E0214 18:57:13.865210 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="extract-utilities" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865215 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="extract-utilities" Feb 14 18:57:13 crc kubenswrapper[4897]: E0214 18:57:13.865224 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" 
containerName="extract-content" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865229 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="extract-content" Feb 14 18:57:13 crc kubenswrapper[4897]: E0214 18:57:13.865238 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="extract-utilities" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865244 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="extract-utilities" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865353 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="134c3238-8970-47dc-8b91-34e8f8f2579c" containerName="registry-server" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865366 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aec9c60-b00a-4d4e-8f6a-74d6aac98aba" containerName="registry-server" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.865836 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.868385 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.868650 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.869898 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.869959 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-487ld" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.870067 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.880347 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.930745 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-b2kwp"] Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953379 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953438 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0c7d383-f35c-4214-b781-50e549db1e0e-tmp\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " 
pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953473 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25d8s\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-kube-api-access-25d8s\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953502 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-sa-token\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953518 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-token\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953536 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config-openshift-service-cacrt\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953689 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") 
" pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953796 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953822 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e0c7d383-f35c-4214-b781-50e549db1e0e-datadir\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.953873 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-entrypoint\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:13 crc kubenswrapper[4897]: I0214 18:57:13.954004 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-trusted-ca\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.019684 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-b2kwp"] Feb 14 18:57:14 crc kubenswrapper[4897]: E0214 18:57:14.020289 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-25d8s metrics sa-token tmp 
trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-b2kwp" podUID="e0c7d383-f35c-4214-b781-50e549db1e0e" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055237 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-entrypoint\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055310 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-trusted-ca\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055335 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055365 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0c7d383-f35c-4214-b781-50e549db1e0e-tmp\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055391 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25d8s\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-kube-api-access-25d8s\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 
18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055414 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-sa-token\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055433 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-token\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055449 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config-openshift-service-cacrt\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055472 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055504 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055520 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e0c7d383-f35c-4214-b781-50e549db1e0e-datadir\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.055589 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e0c7d383-f35c-4214-b781-50e549db1e0e-datadir\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: E0214 18:57:14.055685 4897 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 14 18:57:14 crc kubenswrapper[4897]: E0214 18:57:14.055739 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver podName:e0c7d383-f35c-4214-b781-50e549db1e0e nodeName:}" failed. No retries permitted until 2026-02-14 18:57:14.555721313 +0000 UTC m=+887.532129796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver") pod "collector-b2kwp" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e") : secret "collector-syslog-receiver" not found Feb 14 18:57:14 crc kubenswrapper[4897]: E0214 18:57:14.055827 4897 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 14 18:57:14 crc kubenswrapper[4897]: E0214 18:57:14.055922 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics podName:e0c7d383-f35c-4214-b781-50e549db1e0e nodeName:}" failed. No retries permitted until 2026-02-14 18:57:14.555899128 +0000 UTC m=+887.532307621 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics") pod "collector-b2kwp" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e") : secret "collector-metrics" not found Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.056300 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-entrypoint\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.056525 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config-openshift-service-cacrt\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.056698 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-trusted-ca\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.057228 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.060378 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0c7d383-f35c-4214-b781-50e549db1e0e-tmp\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " 
pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.061530 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-token\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.078410 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25d8s\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-kube-api-access-25d8s\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.079912 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-sa-token\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.105721 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.115750 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.257932 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-entrypoint\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258001 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258078 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25d8s\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-kube-api-access-25d8s\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258179 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e0c7d383-f35c-4214-b781-50e549db1e0e-datadir\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258203 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config-openshift-service-cacrt\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258263 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-trusted-ca\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258291 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0c7d383-f35c-4214-b781-50e549db1e0e-tmp\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258328 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-token\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258415 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-sa-token\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258482 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258773 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.258779 4897 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.259062 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config" (OuterVolumeSpecName: "config") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.259357 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0c7d383-f35c-4214-b781-50e549db1e0e-datadir" (OuterVolumeSpecName: "datadir") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.259881 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.262733 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-kube-api-access-25d8s" (OuterVolumeSpecName: "kube-api-access-25d8s") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "kube-api-access-25d8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.263582 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-token" (OuterVolumeSpecName: "collector-token") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.263886 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-sa-token" (OuterVolumeSpecName: "sa-token") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.265280 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c7d383-f35c-4214-b781-50e549db1e0e-tmp" (OuterVolumeSpecName: "tmp") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360054 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25d8s\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-kube-api-access-25d8s\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360090 4897 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e0c7d383-f35c-4214-b781-50e549db1e0e-datadir\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360100 4897 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360113 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360122 4897 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0c7d383-f35c-4214-b781-50e549db1e0e-tmp\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360134 4897 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-token\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360144 4897 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e0c7d383-f35c-4214-b781-50e549db1e0e-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.360153 4897 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c7d383-f35c-4214-b781-50e549db1e0e-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.563850 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.563974 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.569021 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.571669 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics\") pod \"collector-b2kwp\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " pod="openshift-logging/collector-b2kwp" Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.665576 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") " Feb 14 
18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.665807 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver\") pod \"e0c7d383-f35c-4214-b781-50e549db1e0e\" (UID: \"e0c7d383-f35c-4214-b781-50e549db1e0e\") "
Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.669351 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.670392 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics" (OuterVolumeSpecName: "metrics") pod "e0c7d383-f35c-4214-b781-50e549db1e0e" (UID: "e0c7d383-f35c-4214-b781-50e549db1e0e"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.767608 4897 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Feb 14 18:57:14 crc kubenswrapper[4897]: I0214 18:57:14.767641 4897 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e0c7d383-f35c-4214-b781-50e549db1e0e-metrics\") on node \"crc\" DevicePath \"\""
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.116215 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-b2kwp"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.202748 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-b2kwp"]
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.231108 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-b2kwp"]
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.234171 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-9q6vx"]
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.235846 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.240249 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.241866 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.242079 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-487ld"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.242639 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.243768 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.244793 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9q6vx"]
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.250832 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380090 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-config-openshift-service-cacrt\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d86c8472-f6f6-46c3-9a79-6abfb848be75-sa-token\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380434 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-config\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380681 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-metrics\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380779 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-collector-token\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380836 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476zp\" (UniqueName: \"kubernetes.io/projected/d86c8472-f6f6-46c3-9a79-6abfb848be75-kube-api-access-476zp\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380876 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-trusted-ca\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.380964 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d86c8472-f6f6-46c3-9a79-6abfb848be75-tmp\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.381060 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-collector-syslog-receiver\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.381169 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d86c8472-f6f6-46c3-9a79-6abfb848be75-datadir\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.381211 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-entrypoint\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482395 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-config-openshift-service-cacrt\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482446 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d86c8472-f6f6-46c3-9a79-6abfb848be75-sa-token\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482521 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-config\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482592 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-metrics\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482626 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-collector-token\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482649 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476zp\" (UniqueName: \"kubernetes.io/projected/d86c8472-f6f6-46c3-9a79-6abfb848be75-kube-api-access-476zp\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482673 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-trusted-ca\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482703 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d86c8472-f6f6-46c3-9a79-6abfb848be75-tmp\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482729 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-collector-syslog-receiver\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482759 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d86c8472-f6f6-46c3-9a79-6abfb848be75-datadir\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.482776 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-entrypoint\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.483706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-entrypoint\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.484387 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d86c8472-f6f6-46c3-9a79-6abfb848be75-datadir\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.484686 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-config-openshift-service-cacrt\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.485366 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-trusted-ca\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.485378 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86c8472-f6f6-46c3-9a79-6abfb848be75-config\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.489254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d86c8472-f6f6-46c3-9a79-6abfb848be75-tmp\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.489780 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-metrics\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.492405 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-collector-syslog-receiver\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.502090 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d86c8472-f6f6-46c3-9a79-6abfb848be75-collector-token\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.516524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476zp\" (UniqueName: \"kubernetes.io/projected/d86c8472-f6f6-46c3-9a79-6abfb848be75-kube-api-access-476zp\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.525152 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d86c8472-f6f6-46c3-9a79-6abfb848be75-sa-token\") pod \"collector-9q6vx\" (UID: \"d86c8472-f6f6-46c3-9a79-6abfb848be75\") " pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.560471 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9q6vx"
Feb 14 18:57:15 crc kubenswrapper[4897]: I0214 18:57:15.804015 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c7d383-f35c-4214-b781-50e549db1e0e" path="/var/lib/kubelet/pods/e0c7d383-f35c-4214-b781-50e549db1e0e/volumes"
Feb 14 18:57:16 crc kubenswrapper[4897]: I0214 18:57:16.016459 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9q6vx"]
Feb 14 18:57:16 crc kubenswrapper[4897]: I0214 18:57:16.131844 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-9q6vx" event={"ID":"d86c8472-f6f6-46c3-9a79-6abfb848be75","Type":"ContainerStarted","Data":"e8a9dea05954c240d53be29f4e03d8e74746b949593bdfa677d9f4558a6027d4"}
Feb 14 18:57:23 crc kubenswrapper[4897]: I0214 18:57:23.196502 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-9q6vx" event={"ID":"d86c8472-f6f6-46c3-9a79-6abfb848be75","Type":"ContainerStarted","Data":"551580a1cf1d2242309ae162bd8ea29e580974dc9c98b91ab8847e63d7bb1913"}
Feb 14 18:57:23 crc kubenswrapper[4897]: I0214 18:57:23.227191 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-9q6vx" podStartSLOduration=1.5853173539999998 podStartE2EDuration="8.227166598s" podCreationTimestamp="2026-02-14 18:57:15 +0000 UTC" firstStartedPulling="2026-02-14 18:57:16.013843872 +0000 UTC m=+888.990252395" lastFinishedPulling="2026-02-14 18:57:22.655693126 +0000 UTC m=+895.632101639" observedRunningTime="2026-02-14 
18:57:23.21815295 +0000 UTC m=+896.194561493" watchObservedRunningTime="2026-02-14 18:57:23.227166598 +0000 UTC m=+896.203575121"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.760898 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"]
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.762792 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.766266 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.771096 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"]
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.800441 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.800507 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds7g8\" (UniqueName: \"kubernetes.io/projected/4669d0a9-6bb7-4e10-9e83-88038ec23e72-kube-api-access-ds7g8\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.800583 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.902387 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.902524 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.902585 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds7g8\" (UniqueName: \"kubernetes.io/projected/4669d0a9-6bb7-4e10-9e83-88038ec23e72-kube-api-access-ds7g8\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.903128 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.903137 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:54 crc kubenswrapper[4897]: I0214 18:57:54.926373 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds7g8\" (UniqueName: \"kubernetes.io/projected/4669d0a9-6bb7-4e10-9e83-88038ec23e72-kube-api-access-ds7g8\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:55 crc kubenswrapper[4897]: I0214 18:57:55.077591 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:57:55 crc kubenswrapper[4897]: I0214 18:57:55.539876 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"]
Feb 14 18:57:56 crc kubenswrapper[4897]: I0214 18:57:56.507586 4897 generic.go:334] "Generic (PLEG): container finished" podID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerID="fae6aa5fa62e860ef09e3f688f76e5559475a6a78f0ddd5bed60569f331c95df" exitCode=0
Feb 14 18:57:56 crc kubenswrapper[4897]: I0214 18:57:56.507717 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2" event={"ID":"4669d0a9-6bb7-4e10-9e83-88038ec23e72","Type":"ContainerDied","Data":"fae6aa5fa62e860ef09e3f688f76e5559475a6a78f0ddd5bed60569f331c95df"}
Feb 14 18:57:56 crc kubenswrapper[4897]: I0214 18:57:56.508231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2" event={"ID":"4669d0a9-6bb7-4e10-9e83-88038ec23e72","Type":"ContainerStarted","Data":"52a9776fa0bb02384e65c1c61bdd56613f7e2e8838d6b3efb21a699b764fbbc5"}
Feb 14 18:57:59 crc kubenswrapper[4897]: I0214 18:57:59.535707 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2" event={"ID":"4669d0a9-6bb7-4e10-9e83-88038ec23e72","Type":"ContainerStarted","Data":"1e3959507279abe4a1593463985103e518c0eea421ae9d102db786a5f83f6444"}
Feb 14 18:58:00 crc kubenswrapper[4897]: I0214 18:58:00.545024 4897 generic.go:334] "Generic (PLEG): container finished" podID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerID="1e3959507279abe4a1593463985103e518c0eea421ae9d102db786a5f83f6444" exitCode=0
Feb 14 18:58:00 crc kubenswrapper[4897]: I0214 18:58:00.545190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2" event={"ID":"4669d0a9-6bb7-4e10-9e83-88038ec23e72","Type":"ContainerDied","Data":"1e3959507279abe4a1593463985103e518c0eea421ae9d102db786a5f83f6444"}
Feb 14 18:58:01 crc kubenswrapper[4897]: I0214 18:58:01.559470 4897 generic.go:334] "Generic (PLEG): container finished" podID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerID="5063059c03df3264f54c93c2931565b7b64e3f3b849d2c80897fc7c4babe3c18" exitCode=0
Feb 14 18:58:01 crc kubenswrapper[4897]: I0214 18:58:01.559558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2" event={"ID":"4669d0a9-6bb7-4e10-9e83-88038ec23e72","Type":"ContainerDied","Data":"5063059c03df3264f54c93c2931565b7b64e3f3b849d2c80897fc7c4babe3c18"}
Feb 14 18:58:02 crc kubenswrapper[4897]: I0214 18:58:02.968308 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.046834 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-bundle\") pod \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") "
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.046939 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-util\") pod \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") "
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.046982 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds7g8\" (UniqueName: \"kubernetes.io/projected/4669d0a9-6bb7-4e10-9e83-88038ec23e72-kube-api-access-ds7g8\") pod \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\" (UID: \"4669d0a9-6bb7-4e10-9e83-88038ec23e72\") "
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.048041 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-bundle" (OuterVolumeSpecName: "bundle") pod "4669d0a9-6bb7-4e10-9e83-88038ec23e72" (UID: "4669d0a9-6bb7-4e10-9e83-88038ec23e72"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.052214 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4669d0a9-6bb7-4e10-9e83-88038ec23e72-kube-api-access-ds7g8" (OuterVolumeSpecName: "kube-api-access-ds7g8") pod "4669d0a9-6bb7-4e10-9e83-88038ec23e72" (UID: "4669d0a9-6bb7-4e10-9e83-88038ec23e72"). InnerVolumeSpecName "kube-api-access-ds7g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.057887 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-util" (OuterVolumeSpecName: "util") pod "4669d0a9-6bb7-4e10-9e83-88038ec23e72" (UID: "4669d0a9-6bb7-4e10-9e83-88038ec23e72"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.149135 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.149163 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4669d0a9-6bb7-4e10-9e83-88038ec23e72-util\") on node \"crc\" DevicePath \"\""
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.149172 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds7g8\" (UniqueName: \"kubernetes.io/projected/4669d0a9-6bb7-4e10-9e83-88038ec23e72-kube-api-access-ds7g8\") on node \"crc\" DevicePath \"\""
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.585486 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2" event={"ID":"4669d0a9-6bb7-4e10-9e83-88038ec23e72","Type":"ContainerDied","Data":"52a9776fa0bb02384e65c1c61bdd56613f7e2e8838d6b3efb21a699b764fbbc5"}
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.585562 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a9776fa0bb02384e65c1c61bdd56613f7e2e8838d6b3efb21a699b764fbbc5"
Feb 14 18:58:03 crc kubenswrapper[4897]: I0214 18:58:03.585718 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.490138 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lprfm"]
Feb 14 18:58:06 crc kubenswrapper[4897]: E0214 18:58:06.490658 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="pull"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.490670 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="pull"
Feb 14 18:58:06 crc kubenswrapper[4897]: E0214 18:58:06.490685 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="util"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.490691 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="util"
Feb 14 18:58:06 crc kubenswrapper[4897]: E0214 18:58:06.490707 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="extract"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.490712 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="extract"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.490842 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4669d0a9-6bb7-4e10-9e83-88038ec23e72" containerName="extract"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.491334 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.493698 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.493823 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.501122 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zxxzc"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.503682 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lprfm"]
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.610704 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlk6b\" (UniqueName: \"kubernetes.io/projected/02f39a6b-a277-4235-a912-61b98953c097-kube-api-access-mlk6b\") pod \"nmstate-operator-694c9596b7-lprfm\" (UID: \"02f39a6b-a277-4235-a912-61b98953c097\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.712981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlk6b\" (UniqueName: \"kubernetes.io/projected/02f39a6b-a277-4235-a912-61b98953c097-kube-api-access-mlk6b\") pod \"nmstate-operator-694c9596b7-lprfm\" (UID: \"02f39a6b-a277-4235-a912-61b98953c097\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.731566 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlk6b\" (UniqueName: \"kubernetes.io/projected/02f39a6b-a277-4235-a912-61b98953c097-kube-api-access-mlk6b\") pod \"nmstate-operator-694c9596b7-lprfm\" (UID: \"02f39a6b-a277-4235-a912-61b98953c097\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm"
Feb 14 18:58:06 crc kubenswrapper[4897]: I0214 18:58:06.826687 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm"
Feb 14 18:58:07 crc kubenswrapper[4897]: I0214 18:58:07.280169 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lprfm"]
Feb 14 18:58:07 crc kubenswrapper[4897]: I0214 18:58:07.619595 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm" event={"ID":"02f39a6b-a277-4235-a912-61b98953c097","Type":"ContainerStarted","Data":"9e3ed829c39c6c14402b26bd3221a1cbc70cf58274ab5b636b5ee8387fbf0fd7"}
Feb 14 18:58:10 crc kubenswrapper[4897]: I0214 18:58:10.642472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm" event={"ID":"02f39a6b-a277-4235-a912-61b98953c097","Type":"ContainerStarted","Data":"ef3700eda1ca2173e9466a969b52d451472e1fac11d4c37ef06bc41b519f01d7"}
Feb 14 18:58:10 crc kubenswrapper[4897]: I0214 18:58:10.671617 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-lprfm" podStartSLOduration=1.8548597 podStartE2EDuration="4.671590428s" podCreationTimestamp="2026-02-14 18:58:06 +0000 UTC" firstStartedPulling="2026-02-14 18:58:07.285740939 +0000 UTC m=+940.262149422" lastFinishedPulling="2026-02-14 18:58:10.102471667 +0000 UTC m=+943.078880150" observedRunningTime="2026-02-14 18:58:10.663958941 +0000 UTC m=+943.640367454" watchObservedRunningTime="2026-02-14 18:58:10.671590428 +0000 UTC m=+943.647998921"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.188146 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk"]
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.194846 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.215098 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dqnw8"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.232944 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk"]
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.244825 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"]
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.246512 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.249355 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.268531 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"]
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.276546 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wxd\" (UniqueName: \"kubernetes.io/projected/c70ba798-8c12-43e8-a0e2-d54617b6bb84-kube-api-access-b4wxd\") pod \"nmstate-webhook-866bcb46dc-tf6nv\" (UID: \"c70ba798-8c12-43e8-a0e2-d54617b6bb84\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.276671 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjctx\" (UniqueName: \"kubernetes.io/projected/9cc05bdf-cb61-462f-a326-9f8058bfa699-kube-api-access-kjctx\") pod \"nmstate-metrics-58c85c668d-gg4wk\" (UID: \"9cc05bdf-cb61-462f-a326-9f8058bfa699\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.276714 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c70ba798-8c12-43e8-a0e2-d54617b6bb84-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-tf6nv\" (UID: \"c70ba798-8c12-43e8-a0e2-d54617b6bb84\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.289116 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-d5lnt"]
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.290243 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-d5lnt"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.375193 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf"]
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378019 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4wxd\" (UniqueName: \"kubernetes.io/projected/c70ba798-8c12-43e8-a0e2-d54617b6bb84-kube-api-access-b4wxd\") pod \"nmstate-webhook-866bcb46dc-tf6nv\" (UID: \"c70ba798-8c12-43e8-a0e2-d54617b6bb84\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378088 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-ovs-socket\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378130 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29886\" (UniqueName: \"kubernetes.io/projected/ff7e179e-a00c-436b-bf50-c14810288beb-kube-api-access-29886\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjctx\" (UniqueName: \"kubernetes.io/projected/9cc05bdf-cb61-462f-a326-9f8058bfa699-kube-api-access-kjctx\") pod \"nmstate-metrics-58c85c668d-gg4wk\" (UID: \"9cc05bdf-cb61-462f-a326-9f8058bfa699\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c70ba798-8c12-43e8-a0e2-d54617b6bb84-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-tf6nv\" (UID: \"c70ba798-8c12-43e8-a0e2-d54617b6bb84\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378238 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-nmstate-lock\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378259 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-dbus-socket\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt"
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.378606
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.384435 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4m4q7" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.384442 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.388335 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.405609 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4wxd\" (UniqueName: \"kubernetes.io/projected/c70ba798-8c12-43e8-a0e2-d54617b6bb84-kube-api-access-b4wxd\") pod \"nmstate-webhook-866bcb46dc-tf6nv\" (UID: \"c70ba798-8c12-43e8-a0e2-d54617b6bb84\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.413516 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf"] Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.416762 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c70ba798-8c12-43e8-a0e2-d54617b6bb84-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-tf6nv\" (UID: \"c70ba798-8c12-43e8-a0e2-d54617b6bb84\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.425433 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjctx\" (UniqueName: \"kubernetes.io/projected/9cc05bdf-cb61-462f-a326-9f8058bfa699-kube-api-access-kjctx\") pod \"nmstate-metrics-58c85c668d-gg4wk\" (UID: \"9cc05bdf-cb61-462f-a326-9f8058bfa699\") " 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.479485 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-nmstate-lock\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.479534 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-dbus-socket\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.479619 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-nmstate-lock\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.479870 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-dbus-socket\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.481441 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdtn\" (UniqueName: \"kubernetes.io/projected/d29745b2-a844-4447-bd55-859d755cf733-kube-api-access-5gdtn\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.481551 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d29745b2-a844-4447-bd55-859d755cf733-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.481591 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d29745b2-a844-4447-bd55-859d755cf733-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.481613 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-ovs-socket\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.481655 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ff7e179e-a00c-436b-bf50-c14810288beb-ovs-socket\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.481736 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29886\" (UniqueName: \"kubernetes.io/projected/ff7e179e-a00c-436b-bf50-c14810288beb-kube-api-access-29886\") pod \"nmstate-handler-d5lnt\" (UID: 
\"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.511727 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29886\" (UniqueName: \"kubernetes.io/projected/ff7e179e-a00c-436b-bf50-c14810288beb-kube-api-access-29886\") pod \"nmstate-handler-d5lnt\" (UID: \"ff7e179e-a00c-436b-bf50-c14810288beb\") " pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.548081 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.559366 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b9ddbfb7b-bnlsc"] Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.560302 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.565307 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.582477 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d29745b2-a844-4447-bd55-859d755cf733-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.582695 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d29745b2-a844-4447-bd55-859d755cf733-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.582792 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjpg\" (UniqueName: \"kubernetes.io/projected/5431c44c-05b0-4319-867b-49e3bf15174c-kube-api-access-6mjpg\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.582874 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-trusted-ca-bundle\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.583072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-serving-cert\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.583141 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-oauth-serving-cert\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.583224 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-oauth-config\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.583308 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gdtn\" (UniqueName: \"kubernetes.io/projected/d29745b2-a844-4447-bd55-859d755cf733-kube-api-access-5gdtn\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.583386 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-service-ca\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.583468 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-console-config\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: E0214 18:58:16.582594 4897 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 14 18:58:16 crc kubenswrapper[4897]: E0214 18:58:16.583654 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d29745b2-a844-4447-bd55-859d755cf733-plugin-serving-cert podName:d29745b2-a844-4447-bd55-859d755cf733 nodeName:}" failed. No retries permitted until 2026-02-14 18:58:17.083634158 +0000 UTC m=+950.060042641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d29745b2-a844-4447-bd55-859d755cf733-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-2vrtf" (UID: "d29745b2-a844-4447-bd55-859d755cf733") : secret "plugin-serving-cert" not found Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.584783 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d29745b2-a844-4447-bd55-859d755cf733-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.608891 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gdtn\" (UniqueName: \"kubernetes.io/projected/d29745b2-a844-4447-bd55-859d755cf733-kube-api-access-5gdtn\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" 
Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.620236 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b9ddbfb7b-bnlsc"] Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.620628 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688264 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-oauth-config\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688640 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-service-ca\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688677 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-console-config\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688753 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjpg\" (UniqueName: \"kubernetes.io/projected/5431c44c-05b0-4319-867b-49e3bf15174c-kube-api-access-6mjpg\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688798 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-trusted-ca-bundle\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688831 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-serving-cert\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.688855 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-oauth-serving-cert\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.690081 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-oauth-serving-cert\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.695466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-service-ca\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.695787 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-console-config\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.696197 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-trusted-ca-bundle\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.697878 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-oauth-config\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.702919 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-serving-cert\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.707715 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d5lnt" event={"ID":"ff7e179e-a00c-436b-bf50-c14810288beb","Type":"ContainerStarted","Data":"b85fb1a651dd371ef668a52e30ac495aa60a7b7044c077f1731f1658c2e1fff5"} Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.719603 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjpg\" (UniqueName: 
\"kubernetes.io/projected/5431c44c-05b0-4319-867b-49e3bf15174c-kube-api-access-6mjpg\") pod \"console-7b9ddbfb7b-bnlsc\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.863395 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk"] Feb 14 18:58:16 crc kubenswrapper[4897]: I0214 18:58:16.905462 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.095218 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d29745b2-a844-4447-bd55-859d755cf733-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.101294 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d29745b2-a844-4447-bd55-859d755cf733-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2vrtf\" (UID: \"d29745b2-a844-4447-bd55-859d755cf733\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.146162 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv"] Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.318359 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b9ddbfb7b-bnlsc"] Feb 14 18:58:17 crc kubenswrapper[4897]: W0214 18:58:17.326694 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5431c44c_05b0_4319_867b_49e3bf15174c.slice/crio-9bfebe629d9cafd8981a767f449fe725089170a10821983ed14d3eeaff8a45d0 WatchSource:0}: Error finding container 9bfebe629d9cafd8981a767f449fe725089170a10821983ed14d3eeaff8a45d0: Status 404 returned error can't find the container with id 9bfebe629d9cafd8981a767f449fe725089170a10821983ed14d3eeaff8a45d0 Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.377251 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.716803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk" event={"ID":"9cc05bdf-cb61-462f-a326-9f8058bfa699","Type":"ContainerStarted","Data":"acff528db62c81564f5be6dd37fccc8a3f71526f94addb43b1828fba9955b4ca"} Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.717788 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" event={"ID":"c70ba798-8c12-43e8-a0e2-d54617b6bb84","Type":"ContainerStarted","Data":"0d4bd0223ce8046a43ee27913f57e5bea4125eead02a99e314ca1743082021cc"} Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.719803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9ddbfb7b-bnlsc" event={"ID":"5431c44c-05b0-4319-867b-49e3bf15174c","Type":"ContainerStarted","Data":"6063952d86797d4bae77425130ce9ce6013b306adac3ea54a297e35c746736af"} Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.719833 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9ddbfb7b-bnlsc" event={"ID":"5431c44c-05b0-4319-867b-49e3bf15174c","Type":"ContainerStarted","Data":"9bfebe629d9cafd8981a767f449fe725089170a10821983ed14d3eeaff8a45d0"} Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.834530 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b9ddbfb7b-bnlsc" podStartSLOduration=1.8345031330000001 podStartE2EDuration="1.834503133s" podCreationTimestamp="2026-02-14 18:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:58:17.740840155 +0000 UTC m=+950.717248678" watchObservedRunningTime="2026-02-14 18:58:17.834503133 +0000 UTC m=+950.810911656" Feb 14 18:58:17 crc kubenswrapper[4897]: I0214 18:58:17.843568 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf"] Feb 14 18:58:18 crc kubenswrapper[4897]: I0214 18:58:18.730759 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" event={"ID":"d29745b2-a844-4447-bd55-859d755cf733","Type":"ContainerStarted","Data":"e5e81392f2df5f3f37d2a789f95593c7caeb3c6e7d41cf614a99910a2aaef70a"} Feb 14 18:58:22 crc kubenswrapper[4897]: I0214 18:58:22.779652 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d5lnt" event={"ID":"ff7e179e-a00c-436b-bf50-c14810288beb","Type":"ContainerStarted","Data":"b0f337d6b81e7b700e35e38ba416464ea9bb2d887feff21a9b19ffac33311e16"} Feb 14 18:58:22 crc kubenswrapper[4897]: I0214 18:58:22.780214 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:22 crc kubenswrapper[4897]: I0214 18:58:22.782599 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" event={"ID":"c70ba798-8c12-43e8-a0e2-d54617b6bb84","Type":"ContainerStarted","Data":"d9ddeaf414161b351bb6cb28597a65955eb081d4a42be979fd19a65fb7b3fd34"} Feb 14 18:58:22 crc kubenswrapper[4897]: I0214 18:58:22.784192 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" Feb 14 18:58:22 crc kubenswrapper[4897]: I0214 18:58:22.805660 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-d5lnt" podStartSLOduration=1.724451267 podStartE2EDuration="6.805634943s" podCreationTimestamp="2026-02-14 18:58:16 +0000 UTC" firstStartedPulling="2026-02-14 18:58:16.686704386 +0000 UTC m=+949.663112869" lastFinishedPulling="2026-02-14 18:58:21.767888052 +0000 UTC m=+954.744296545" observedRunningTime="2026-02-14 18:58:22.805056966 +0000 UTC m=+955.781465489" watchObservedRunningTime="2026-02-14 18:58:22.805634943 +0000 UTC m=+955.782043436" Feb 14 18:58:22 crc kubenswrapper[4897]: I0214 18:58:22.830678 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" podStartSLOduration=2.2395501700000002 podStartE2EDuration="6.830656378s" podCreationTimestamp="2026-02-14 18:58:16 +0000 UTC" firstStartedPulling="2026-02-14 18:58:17.163131678 +0000 UTC m=+950.139540161" lastFinishedPulling="2026-02-14 18:58:21.754237886 +0000 UTC m=+954.730646369" observedRunningTime="2026-02-14 18:58:22.823357061 +0000 UTC m=+955.799765574" watchObservedRunningTime="2026-02-14 18:58:22.830656378 +0000 UTC m=+955.807064861" Feb 14 18:58:24 crc kubenswrapper[4897]: I0214 18:58:24.803281 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk" event={"ID":"9cc05bdf-cb61-462f-a326-9f8058bfa699","Type":"ContainerStarted","Data":"a7e5c3ad48ffb31e02e59ecb3a0458d7e3779ad4e94af476b500e882081a32e9"} Feb 14 18:58:26 crc kubenswrapper[4897]: I0214 18:58:26.906544 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:26 crc kubenswrapper[4897]: I0214 18:58:26.907100 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:26 crc kubenswrapper[4897]: I0214 18:58:26.911645 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:27 crc kubenswrapper[4897]: I0214 18:58:27.834044 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" event={"ID":"d29745b2-a844-4447-bd55-859d755cf733","Type":"ContainerStarted","Data":"8cb4fa9d367941b5e226de2016f07889ad0e3cbcecc5000022bad3245c675121"} Feb 14 18:58:27 crc kubenswrapper[4897]: I0214 18:58:27.837190 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 18:58:27 crc kubenswrapper[4897]: I0214 18:58:27.878496 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2vrtf" podStartSLOduration=2.673327934 podStartE2EDuration="11.878469101s" podCreationTimestamp="2026-02-14 18:58:16 +0000 UTC" firstStartedPulling="2026-02-14 18:58:17.888213982 +0000 UTC m=+950.864622465" lastFinishedPulling="2026-02-14 18:58:27.093355139 +0000 UTC m=+960.069763632" observedRunningTime="2026-02-14 18:58:27.859440765 +0000 UTC m=+960.835849248" watchObservedRunningTime="2026-02-14 18:58:27.878469101 +0000 UTC m=+960.854877604" Feb 14 18:58:27 crc kubenswrapper[4897]: I0214 18:58:27.920364 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74f65588b4-xzwdj"] Feb 14 18:58:31 crc kubenswrapper[4897]: I0214 18:58:31.650746 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-d5lnt" Feb 14 18:58:31 crc kubenswrapper[4897]: I0214 18:58:31.726924 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 18:58:31 crc kubenswrapper[4897]: I0214 18:58:31.727006 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:58:31 crc kubenswrapper[4897]: I0214 18:58:31.876447 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk" event={"ID":"9cc05bdf-cb61-462f-a326-9f8058bfa699","Type":"ContainerStarted","Data":"facdef21a86657ba4cfc44658e62248596458dbf552de88f1bad1dcc86caffcc"} Feb 14 18:58:31 crc kubenswrapper[4897]: I0214 18:58:31.911922 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gg4wk" podStartSLOduration=1.8711277339999999 podStartE2EDuration="15.911895218s" podCreationTimestamp="2026-02-14 18:58:16 +0000 UTC" firstStartedPulling="2026-02-14 18:58:16.877141945 +0000 UTC m=+949.853550428" lastFinishedPulling="2026-02-14 18:58:30.917909429 +0000 UTC m=+963.894317912" observedRunningTime="2026-02-14 18:58:31.897997324 +0000 UTC m=+964.874405817" watchObservedRunningTime="2026-02-14 18:58:31.911895218 +0000 UTC m=+964.888303731" Feb 14 18:58:36 crc kubenswrapper[4897]: I0214 18:58:36.574383 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" Feb 14 18:58:52 crc kubenswrapper[4897]: I0214 18:58:52.994579 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-74f65588b4-xzwdj" podUID="1c9e604b-a644-4bd8-a149-c91719694ea8" containerName="console" 
containerID="cri-o://6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6" gracePeriod=15 Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.471940 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74f65588b4-xzwdj_1c9e604b-a644-4bd8-a149-c91719694ea8/console/0.log" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.472280 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.575941 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-oauth-serving-cert\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.575984 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-service-ca\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.576013 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-serving-cert\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.576114 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-console-config\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 
18:58:53.576171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-oauth-config\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.576262 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-trusted-ca-bundle\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.576334 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7868\" (UniqueName: \"kubernetes.io/projected/1c9e604b-a644-4bd8-a149-c91719694ea8-kube-api-access-w7868\") pod \"1c9e604b-a644-4bd8-a149-c91719694ea8\" (UID: \"1c9e604b-a644-4bd8-a149-c91719694ea8\") " Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.577765 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.578092 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-console-config" (OuterVolumeSpecName: "console-config") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.578170 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-service-ca" (OuterVolumeSpecName: "service-ca") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.578575 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.584735 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9e604b-a644-4bd8-a149-c91719694ea8-kube-api-access-w7868" (OuterVolumeSpecName: "kube-api-access-w7868") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "kube-api-access-w7868". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.585435 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.586697 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1c9e604b-a644-4bd8-a149-c91719694ea8" (UID: "1c9e604b-a644-4bd8-a149-c91719694ea8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678424 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7868\" (UniqueName: \"kubernetes.io/projected/1c9e604b-a644-4bd8-a149-c91719694ea8-kube-api-access-w7868\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678493 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678522 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678548 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678570 4897 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678594 4897 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c9e604b-a644-4bd8-a149-c91719694ea8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:53 crc kubenswrapper[4897]: I0214 18:58:53.678616 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c9e604b-a644-4bd8-a149-c91719694ea8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.071911 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74f65588b4-xzwdj_1c9e604b-a644-4bd8-a149-c91719694ea8/console/0.log" Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.072020 4897 generic.go:334] "Generic (PLEG): container finished" podID="1c9e604b-a644-4bd8-a149-c91719694ea8" containerID="6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6" exitCode=2 Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.072091 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74f65588b4-xzwdj" Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.072075 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74f65588b4-xzwdj" event={"ID":"1c9e604b-a644-4bd8-a149-c91719694ea8","Type":"ContainerDied","Data":"6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6"} Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.073059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74f65588b4-xzwdj" event={"ID":"1c9e604b-a644-4bd8-a149-c91719694ea8","Type":"ContainerDied","Data":"e7b4123865730ba7d0cd78ab4175cecbb731f7ea248ddabfea8f52364481096a"} Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.073078 4897 scope.go:117] "RemoveContainer" containerID="6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6" Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.102507 4897 scope.go:117] "RemoveContainer" containerID="6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6" Feb 14 18:58:54 crc kubenswrapper[4897]: E0214 18:58:54.102947 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6\": container with ID starting with 6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6 not found: ID does not exist" containerID="6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6" Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.102980 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6"} err="failed to get container status \"6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6\": rpc error: code = NotFound desc = could not find container \"6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6\": 
container with ID starting with 6cc9520cefb4c4d021da629b847023015a6c3270feca4a3e6793837c6077c3b6 not found: ID does not exist" Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.108089 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74f65588b4-xzwdj"] Feb 14 18:58:54 crc kubenswrapper[4897]: I0214 18:58:54.112085 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-74f65588b4-xzwdj"] Feb 14 18:58:55 crc kubenswrapper[4897]: I0214 18:58:55.802894 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9e604b-a644-4bd8-a149-c91719694ea8" path="/var/lib/kubelet/pods/1c9e604b-a644-4bd8-a149-c91719694ea8/volumes" Feb 14 18:58:56 crc kubenswrapper[4897]: I0214 18:58:56.913195 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv"] Feb 14 18:58:56 crc kubenswrapper[4897]: E0214 18:58:56.914119 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9e604b-a644-4bd8-a149-c91719694ea8" containerName="console" Feb 14 18:58:56 crc kubenswrapper[4897]: I0214 18:58:56.914154 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9e604b-a644-4bd8-a149-c91719694ea8" containerName="console" Feb 14 18:58:56 crc kubenswrapper[4897]: I0214 18:58:56.914547 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9e604b-a644-4bd8-a149-c91719694ea8" containerName="console" Feb 14 18:58:56 crc kubenswrapper[4897]: I0214 18:58:56.918053 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:56 crc kubenswrapper[4897]: I0214 18:58:56.921128 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 18:58:56 crc kubenswrapper[4897]: I0214 18:58:56.933922 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv"] Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.027960 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.028016 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxm2\" (UniqueName: \"kubernetes.io/projected/700ecb41-d155-4a7c-94c0-91daf79fef82-kube-api-access-rhxm2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.028156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: 
I0214 18:58:57.130095 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.130158 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhxm2\" (UniqueName: \"kubernetes.io/projected/700ecb41-d155-4a7c-94c0-91daf79fef82-kube-api-access-rhxm2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.130273 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.130939 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.130947 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.160931 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhxm2\" (UniqueName: \"kubernetes.io/projected/700ecb41-d155-4a7c-94c0-91daf79fef82-kube-api-access-rhxm2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.246806 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:58:57 crc kubenswrapper[4897]: I0214 18:58:57.516040 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv"] Feb 14 18:58:58 crc kubenswrapper[4897]: I0214 18:58:58.101118 4897 generic.go:334] "Generic (PLEG): container finished" podID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerID="6a44840e2cb4d9cd1e4a9340fc3e68898fcf2ca2ead8e846b964c4a9d4ae2ca5" exitCode=0 Feb 14 18:58:58 crc kubenswrapper[4897]: I0214 18:58:58.101249 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" event={"ID":"700ecb41-d155-4a7c-94c0-91daf79fef82","Type":"ContainerDied","Data":"6a44840e2cb4d9cd1e4a9340fc3e68898fcf2ca2ead8e846b964c4a9d4ae2ca5"} Feb 14 18:58:58 crc kubenswrapper[4897]: I0214 18:58:58.102085 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" event={"ID":"700ecb41-d155-4a7c-94c0-91daf79fef82","Type":"ContainerStarted","Data":"bf333eb3f6b3ee6794386fc85a77763feba3e689b8f085436607e13641a17e85"} Feb 14 18:58:58 crc kubenswrapper[4897]: I0214 18:58:58.103355 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 18:59:00 crc kubenswrapper[4897]: I0214 18:59:00.121470 4897 generic.go:334] "Generic (PLEG): container finished" podID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerID="615bc68bb509035c7ededa8c8c07dbda385a2853eef7a398f4d0276a7562e600" exitCode=0 Feb 14 18:59:00 crc kubenswrapper[4897]: I0214 18:59:00.121587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" event={"ID":"700ecb41-d155-4a7c-94c0-91daf79fef82","Type":"ContainerDied","Data":"615bc68bb509035c7ededa8c8c07dbda385a2853eef7a398f4d0276a7562e600"} Feb 14 18:59:01 crc kubenswrapper[4897]: I0214 18:59:01.134240 4897 generic.go:334] "Generic (PLEG): container finished" podID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerID="253f141fe7f386716744c274c95f72197b8a027c3269461508a98ee06233ee50" exitCode=0 Feb 14 18:59:01 crc kubenswrapper[4897]: I0214 18:59:01.134294 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" event={"ID":"700ecb41-d155-4a7c-94c0-91daf79fef82","Type":"ContainerDied","Data":"253f141fe7f386716744c274c95f72197b8a027c3269461508a98ee06233ee50"} Feb 14 18:59:01 crc kubenswrapper[4897]: I0214 18:59:01.728748 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 
18:59:01 crc kubenswrapper[4897]: I0214 18:59:01.729222 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.461626 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.521881 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhxm2\" (UniqueName: \"kubernetes.io/projected/700ecb41-d155-4a7c-94c0-91daf79fef82-kube-api-access-rhxm2\") pod \"700ecb41-d155-4a7c-94c0-91daf79fef82\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.521955 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-bundle\") pod \"700ecb41-d155-4a7c-94c0-91daf79fef82\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.522060 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-util\") pod \"700ecb41-d155-4a7c-94c0-91daf79fef82\" (UID: \"700ecb41-d155-4a7c-94c0-91daf79fef82\") " Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.523724 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-bundle" (OuterVolumeSpecName: "bundle") pod "700ecb41-d155-4a7c-94c0-91daf79fef82" (UID: "700ecb41-d155-4a7c-94c0-91daf79fef82"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.534241 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700ecb41-d155-4a7c-94c0-91daf79fef82-kube-api-access-rhxm2" (OuterVolumeSpecName: "kube-api-access-rhxm2") pod "700ecb41-d155-4a7c-94c0-91daf79fef82" (UID: "700ecb41-d155-4a7c-94c0-91daf79fef82"). InnerVolumeSpecName "kube-api-access-rhxm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.543280 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-util" (OuterVolumeSpecName: "util") pod "700ecb41-d155-4a7c-94c0-91daf79fef82" (UID: "700ecb41-d155-4a7c-94c0-91daf79fef82"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.625141 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhxm2\" (UniqueName: \"kubernetes.io/projected/700ecb41-d155-4a7c-94c0-91daf79fef82-kube-api-access-rhxm2\") on node \"crc\" DevicePath \"\"" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.625189 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 18:59:02 crc kubenswrapper[4897]: I0214 18:59:02.625203 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/700ecb41-d155-4a7c-94c0-91daf79fef82-util\") on node \"crc\" DevicePath \"\"" Feb 14 18:59:03 crc kubenswrapper[4897]: I0214 18:59:03.152021 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" 
event={"ID":"700ecb41-d155-4a7c-94c0-91daf79fef82","Type":"ContainerDied","Data":"bf333eb3f6b3ee6794386fc85a77763feba3e689b8f085436607e13641a17e85"} Feb 14 18:59:03 crc kubenswrapper[4897]: I0214 18:59:03.152086 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf333eb3f6b3ee6794386fc85a77763feba3e689b8f085436607e13641a17e85" Feb 14 18:59:03 crc kubenswrapper[4897]: I0214 18:59:03.152157 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv" Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.415703 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"] Feb 14 18:59:12 crc kubenswrapper[4897]: E0214 18:59:12.416628 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="extract" Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.416646 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="extract" Feb 14 18:59:12 crc kubenswrapper[4897]: E0214 18:59:12.416682 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="util" Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.416690 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="util" Feb 14 18:59:12 crc kubenswrapper[4897]: E0214 18:59:12.416714 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="pull" Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.416721 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="pull" Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.416874 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="700ecb41-d155-4a7c-94c0-91daf79fef82" containerName="extract"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.417548 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.419440 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.419664 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8dnmw"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.419689 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.420066 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.422332 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.437332 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"]
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.510795 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ef9cd33-5ad0-494f-9d50-177eadf0483f-webhook-cert\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.510889 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kb7p\" (UniqueName: \"kubernetes.io/projected/1ef9cd33-5ad0-494f-9d50-177eadf0483f-kube-api-access-4kb7p\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.511097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ef9cd33-5ad0-494f-9d50-177eadf0483f-apiservice-cert\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.612762 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ef9cd33-5ad0-494f-9d50-177eadf0483f-webhook-cert\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.612866 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kb7p\" (UniqueName: \"kubernetes.io/projected/1ef9cd33-5ad0-494f-9d50-177eadf0483f-kube-api-access-4kb7p\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.612924 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ef9cd33-5ad0-494f-9d50-177eadf0483f-apiservice-cert\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.619553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ef9cd33-5ad0-494f-9d50-177eadf0483f-apiservice-cert\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.625085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ef9cd33-5ad0-494f-9d50-177eadf0483f-webhook-cert\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.640107 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kb7p\" (UniqueName: \"kubernetes.io/projected/1ef9cd33-5ad0-494f-9d50-177eadf0483f-kube-api-access-4kb7p\") pod \"metallb-operator-controller-manager-7cc9d46ffd-mbftl\" (UID: \"1ef9cd33-5ad0-494f-9d50-177eadf0483f\") " pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.674983 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"]
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.675926 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.678812 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hv7xl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.679956 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.682438 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.695215 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"]
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.743703 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.817868 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de593d8b-e41e-4a52-bead-28e46be05e4d-webhook-cert\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.818280 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h22wf\" (UniqueName: \"kubernetes.io/projected/de593d8b-e41e-4a52-bead-28e46be05e4d-kube-api-access-h22wf\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.818328 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de593d8b-e41e-4a52-bead-28e46be05e4d-apiservice-cert\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.920869 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de593d8b-e41e-4a52-bead-28e46be05e4d-apiservice-cert\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.921316 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de593d8b-e41e-4a52-bead-28e46be05e4d-webhook-cert\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.921447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h22wf\" (UniqueName: \"kubernetes.io/projected/de593d8b-e41e-4a52-bead-28e46be05e4d-kube-api-access-h22wf\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.946701 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h22wf\" (UniqueName: \"kubernetes.io/projected/de593d8b-e41e-4a52-bead-28e46be05e4d-kube-api-access-h22wf\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.949767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de593d8b-e41e-4a52-bead-28e46be05e4d-webhook-cert\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:12 crc kubenswrapper[4897]: I0214 18:59:12.953609 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de593d8b-e41e-4a52-bead-28e46be05e4d-apiservice-cert\") pod \"metallb-operator-webhook-server-c8d485b4-vdmjx\" (UID: \"de593d8b-e41e-4a52-bead-28e46be05e4d\") " pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:13 crc kubenswrapper[4897]: I0214 18:59:13.005422 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:13 crc kubenswrapper[4897]: I0214 18:59:13.397727 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"]
Feb 14 18:59:13 crc kubenswrapper[4897]: W0214 18:59:13.407483 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef9cd33_5ad0_494f_9d50_177eadf0483f.slice/crio-ce233a3c05f77c4d96a0f49366c3eda3b978b2dceb13a8369bcf796d8977a35b WatchSource:0}: Error finding container ce233a3c05f77c4d96a0f49366c3eda3b978b2dceb13a8369bcf796d8977a35b: Status 404 returned error can't find the container with id ce233a3c05f77c4d96a0f49366c3eda3b978b2dceb13a8369bcf796d8977a35b
Feb 14 18:59:13 crc kubenswrapper[4897]: I0214 18:59:13.501726 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"]
Feb 14 18:59:13 crc kubenswrapper[4897]: W0214 18:59:13.503004 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde593d8b_e41e_4a52_bead_28e46be05e4d.slice/crio-a0d4d8cae3ec2ca7b9a3ef0af6d847bbb1b8aea424bfc1c7ff086c534a3c48bf WatchSource:0}: Error finding container a0d4d8cae3ec2ca7b9a3ef0af6d847bbb1b8aea424bfc1c7ff086c534a3c48bf: Status 404 returned error can't find the container with id a0d4d8cae3ec2ca7b9a3ef0af6d847bbb1b8aea424bfc1c7ff086c534a3c48bf
Feb 14 18:59:14 crc kubenswrapper[4897]: I0214 18:59:14.237795 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" event={"ID":"de593d8b-e41e-4a52-bead-28e46be05e4d","Type":"ContainerStarted","Data":"a0d4d8cae3ec2ca7b9a3ef0af6d847bbb1b8aea424bfc1c7ff086c534a3c48bf"}
Feb 14 18:59:14 crc kubenswrapper[4897]: I0214 18:59:14.241111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl" event={"ID":"1ef9cd33-5ad0-494f-9d50-177eadf0483f","Type":"ContainerStarted","Data":"ce233a3c05f77c4d96a0f49366c3eda3b978b2dceb13a8369bcf796d8977a35b"}
Feb 14 18:59:19 crc kubenswrapper[4897]: I0214 18:59:19.277060 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl" event={"ID":"1ef9cd33-5ad0-494f-9d50-177eadf0483f","Type":"ContainerStarted","Data":"fd30dd9ebcc7c72829486e74038fdef218ee8772b478ad3b602f0a6bcb08a92b"}
Feb 14 18:59:19 crc kubenswrapper[4897]: I0214 18:59:19.277601 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:19 crc kubenswrapper[4897]: I0214 18:59:19.279239 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" event={"ID":"de593d8b-e41e-4a52-bead-28e46be05e4d","Type":"ContainerStarted","Data":"d7d1fba24477252b375051d9fc0653a074c5444ef6c5a5e6c5b26e37b19ab09b"}
Feb 14 18:59:19 crc kubenswrapper[4897]: I0214 18:59:19.279387 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:19 crc kubenswrapper[4897]: I0214 18:59:19.302694 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl" podStartSLOduration=2.2616610120000002 podStartE2EDuration="7.302672327s" podCreationTimestamp="2026-02-14 18:59:12 +0000 UTC" firstStartedPulling="2026-02-14 18:59:13.410874418 +0000 UTC m=+1006.387282901" lastFinishedPulling="2026-02-14 18:59:18.451885723 +0000 UTC m=+1011.428294216" observedRunningTime="2026-02-14 18:59:19.292237925 +0000 UTC m=+1012.268646418" watchObservedRunningTime="2026-02-14 18:59:19.302672327 +0000 UTC m=+1012.279080820"
Feb 14 18:59:19 crc kubenswrapper[4897]: I0214 18:59:19.323557 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" podStartSLOduration=2.340110496 podStartE2EDuration="7.323538552s" podCreationTimestamp="2026-02-14 18:59:12 +0000 UTC" firstStartedPulling="2026-02-14 18:59:13.506451461 +0000 UTC m=+1006.482859954" lastFinishedPulling="2026-02-14 18:59:18.489879527 +0000 UTC m=+1011.466288010" observedRunningTime="2026-02-14 18:59:19.314956517 +0000 UTC m=+1012.291365020" watchObservedRunningTime="2026-02-14 18:59:19.323538552 +0000 UTC m=+1012.299947035"
Feb 14 18:59:31 crc kubenswrapper[4897]: I0214 18:59:31.726239 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 18:59:31 crc kubenswrapper[4897]: I0214 18:59:31.726724 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 18:59:31 crc kubenswrapper[4897]: I0214 18:59:31.726775 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 18:59:31 crc kubenswrapper[4897]: I0214 18:59:31.727584 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f530591baa3a6bc6b0de2a6354906a1508c867fd239d41af91ab4794b66dc167"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 18:59:31 crc kubenswrapper[4897]: I0214 18:59:31.727642 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://f530591baa3a6bc6b0de2a6354906a1508c867fd239d41af91ab4794b66dc167" gracePeriod=600
Feb 14 18:59:32 crc kubenswrapper[4897]: I0214 18:59:32.388925 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="f530591baa3a6bc6b0de2a6354906a1508c867fd239d41af91ab4794b66dc167" exitCode=0
Feb 14 18:59:32 crc kubenswrapper[4897]: I0214 18:59:32.388973 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"f530591baa3a6bc6b0de2a6354906a1508c867fd239d41af91ab4794b66dc167"}
Feb 14 18:59:32 crc kubenswrapper[4897]: I0214 18:59:32.389001 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740"}
Feb 14 18:59:32 crc kubenswrapper[4897]: I0214 18:59:32.389017 4897 scope.go:117] "RemoveContainer" containerID="446e5cdc189ae2c51f665c763c60fe16201efbf3c0c2e1e9f8fe851134e12224"
Feb 14 18:59:33 crc kubenswrapper[4897]: I0214 18:59:33.009553 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx"
Feb 14 18:59:52 crc kubenswrapper[4897]: I0214 18:59:52.747074 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.549184 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"]
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.550449 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.554846 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.555152 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-x6crb"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.558511 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ks77p"]
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.563320 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.568373 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.568378 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.576096 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"]
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.663293 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4r6x6"]
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.664529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.666994 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.667225 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-749d2"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.667399 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.668165 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676012 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ea0a9e9-940c-4856-8fd0-f19994e3b810-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676120 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-reloader\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676184 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mj4j\" (UniqueName: \"kubernetes.io/projected/7ea0a9e9-940c-4856-8fd0-f19994e3b810-kube-api-access-9mj4j\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676212 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676262 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics-certs\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676448 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9hmx\" (UniqueName: \"kubernetes.io/projected/1b139a41-dd2e-42ba-a86d-01ade60da46f-kube-api-access-w9hmx\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676632 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-startup\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676671 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-sockets\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.676719 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-conf\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.688762 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-mdj4b"]
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.689968 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.692820 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.703532 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mdj4b"]
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.777918 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-metrics-certs\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.778937 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics-certs\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779080 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-cert\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9hmx\" (UniqueName: \"kubernetes.io/projected/1b139a41-dd2e-42ba-a86d-01ade60da46f-kube-api-access-w9hmx\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779310 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-startup\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779395 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-sockets\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: E0214 18:59:53.779103 4897 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779530 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-conf\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: E0214 18:59:53.779573 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics-certs podName:1b139a41-dd2e-42ba-a86d-01ade60da46f nodeName:}" failed. No retries permitted until 2026-02-14 18:59:54.279536427 +0000 UTC m=+1047.255944910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics-certs") pod "frr-k8s-ks77p" (UID: "1b139a41-dd2e-42ba-a86d-01ade60da46f") : secret "frr-k8s-certs-secret" not found
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779685 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjrd\" (UniqueName: \"kubernetes.io/projected/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-kube-api-access-rjjrd\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779777 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-metrics-certs\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779904 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmfx\" (UniqueName: \"kubernetes.io/projected/ae82eac1-c909-47f2-b4b5-2f3f1267345e-kube-api-access-rlmfx\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779992 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-conf\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.779999 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ea0a9e9-940c-4856-8fd0-f19994e3b810-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-sockets\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780114 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780208 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-reloader\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780305 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mj4j\" (UniqueName: \"kubernetes.io/projected/7ea0a9e9-940c-4856-8fd0-f19994e3b810-kube-api-access-9mj4j\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780342 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780383 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae82eac1-c909-47f2-b4b5-2f3f1267345e-metallb-excludel2\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780444 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-reloader\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: E0214 18:59:53.780547 4897 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 14 18:59:53 crc kubenswrapper[4897]: E0214 18:59:53.780680 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ea0a9e9-940c-4856-8fd0-f19994e3b810-cert podName:7ea0a9e9-940c-4856-8fd0-f19994e3b810 nodeName:}" failed. No retries permitted until 2026-02-14 18:59:54.280660642 +0000 UTC m=+1047.257069125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ea0a9e9-940c-4856-8fd0-f19994e3b810-cert") pod "frr-k8s-webhook-server-78b44bf5bb-n6ptt" (UID: "7ea0a9e9-940c-4856-8fd0-f19994e3b810") : secret "frr-k8s-webhook-server-cert" not found
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.780758 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.781501 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1b139a41-dd2e-42ba-a86d-01ade60da46f-frr-startup\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.804627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mj4j\" (UniqueName: \"kubernetes.io/projected/7ea0a9e9-940c-4856-8fd0-f19994e3b810-kube-api-access-9mj4j\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.815607 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9hmx\" (UniqueName: \"kubernetes.io/projected/1b139a41-dd2e-42ba-a86d-01ade60da46f-kube-api-access-w9hmx\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.881756 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjrd\" (UniqueName: \"kubernetes.io/projected/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-kube-api-access-rjjrd\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.881811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-metrics-certs\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.881845 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlmfx\" (UniqueName: \"kubernetes.io/projected/ae82eac1-c909-47f2-b4b5-2f3f1267345e-kube-api-access-rlmfx\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.881882 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.881928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae82eac1-c909-47f2-b4b5-2f3f1267345e-metallb-excludel2\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.881949 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-metrics-certs\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.882012 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-cert\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b"
Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.883255 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae82eac1-c909-47f2-b4b5-2f3f1267345e-metallb-excludel2\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6"
Feb 14 18:59:53 crc kubenswrapper[4897]: E0214 18:59:53.883487 4897 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 14 18:59:53 crc kubenswrapper[4897]: E0214 18:59:53.883646 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist podName:ae82eac1-c909-47f2-b4b5-2f3f1267345e nodeName:}" failed. No retries permitted until 2026-02-14 18:59:54.383624273 +0000 UTC m=+1047.360032776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist") pod "speaker-4r6x6" (UID: "ae82eac1-c909-47f2-b4b5-2f3f1267345e") : secret "metallb-memberlist" not found Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.886007 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-metrics-certs\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6" Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.886143 4897 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.887594 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-metrics-certs\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b" Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.898361 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-cert\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b" Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.900815 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlmfx\" (UniqueName: \"kubernetes.io/projected/ae82eac1-c909-47f2-b4b5-2f3f1267345e-kube-api-access-rlmfx\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6" Feb 14 18:59:53 crc kubenswrapper[4897]: I0214 18:59:53.904399 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rjjrd\" (UniqueName: \"kubernetes.io/projected/4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b-kube-api-access-rjjrd\") pod \"controller-69bbfbf88f-mdj4b\" (UID: \"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b\") " pod="metallb-system/controller-69bbfbf88f-mdj4b" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.006536 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mdj4b" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.288004 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ea0a9e9-940c-4856-8fd0-f19994e3b810-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.288387 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics-certs\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.292708 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ea0a9e9-940c-4856-8fd0-f19994e3b810-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n6ptt\" (UID: \"7ea0a9e9-940c-4856-8fd0-f19994e3b810\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.293556 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b139a41-dd2e-42ba-a86d-01ade60da46f-metrics-certs\") pod \"frr-k8s-ks77p\" (UID: \"1b139a41-dd2e-42ba-a86d-01ade60da46f\") " pod="metallb-system/frr-k8s-ks77p" Feb 14 18:59:54 crc kubenswrapper[4897]: 
I0214 18:59:54.390352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6" Feb 14 18:59:54 crc kubenswrapper[4897]: E0214 18:59:54.390517 4897 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 18:59:54 crc kubenswrapper[4897]: E0214 18:59:54.390602 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist podName:ae82eac1-c909-47f2-b4b5-2f3f1267345e nodeName:}" failed. No retries permitted until 2026-02-14 18:59:55.390585915 +0000 UTC m=+1048.366994388 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist") pod "speaker-4r6x6" (UID: "ae82eac1-c909-47f2-b4b5-2f3f1267345e") : secret "metallb-memberlist" not found Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.435767 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mdj4b"] Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.471371 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.482900 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ks77p" Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.572092 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mdj4b" event={"ID":"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b","Type":"ContainerStarted","Data":"6f984eac2b5876365e000f5d8fa3f259f72eb011f772ab9ebfc4bf4660674016"} Feb 14 18:59:54 crc kubenswrapper[4897]: I0214 18:59:54.746926 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"] Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.408270 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6" Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.428888 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae82eac1-c909-47f2-b4b5-2f3f1267345e-memberlist\") pod \"speaker-4r6x6\" (UID: \"ae82eac1-c909-47f2-b4b5-2f3f1267345e\") " pod="metallb-system/speaker-4r6x6" Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.482590 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4r6x6" Feb 14 18:59:55 crc kubenswrapper[4897]: W0214 18:59:55.505283 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae82eac1_c909_47f2_b4b5_2f3f1267345e.slice/crio-6611daaaead6478c51844df8ea72ab81da48371086dcf071fc8dcea7cf7e1ba5 WatchSource:0}: Error finding container 6611daaaead6478c51844df8ea72ab81da48371086dcf071fc8dcea7cf7e1ba5: Status 404 returned error can't find the container with id 6611daaaead6478c51844df8ea72ab81da48371086dcf071fc8dcea7cf7e1ba5 Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.581557 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4r6x6" event={"ID":"ae82eac1-c909-47f2-b4b5-2f3f1267345e","Type":"ContainerStarted","Data":"6611daaaead6478c51844df8ea72ab81da48371086dcf071fc8dcea7cf7e1ba5"} Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.586129 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"72a26f015f47a5004d79982dea7a8e3cd09ff603782eb7f4ac03396ff7933706"} Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.587567 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" event={"ID":"7ea0a9e9-940c-4856-8fd0-f19994e3b810","Type":"ContainerStarted","Data":"edc1c5eb233aa44d5f8a81b892f53246888e3a6c69ec6a5dd24bd1d6024d1881"} Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.589494 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mdj4b" event={"ID":"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b","Type":"ContainerStarted","Data":"6d0091a52d9187319fce89249e4a1fb0b882fc16d7e1041d2809bb4d50944a28"} Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.589520 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-69bbfbf88f-mdj4b" event={"ID":"4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b","Type":"ContainerStarted","Data":"cc8ea5c57f536c4ef9c7b973a05cd4d4984dac18aa51fc4be62c0a9589ce42b3"} Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.590177 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-mdj4b" Feb 14 18:59:55 crc kubenswrapper[4897]: I0214 18:59:55.631392 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-mdj4b" podStartSLOduration=2.631377638 podStartE2EDuration="2.631377638s" podCreationTimestamp="2026-02-14 18:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 18:59:55.625459475 +0000 UTC m=+1048.601867958" watchObservedRunningTime="2026-02-14 18:59:55.631377638 +0000 UTC m=+1048.607786121" Feb 14 18:59:56 crc kubenswrapper[4897]: I0214 18:59:56.601694 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4r6x6" event={"ID":"ae82eac1-c909-47f2-b4b5-2f3f1267345e","Type":"ContainerStarted","Data":"9c57e280cbf88950b56668de2eca130796711cdc64dd4a1e5a9e2a458b6e3948"} Feb 14 18:59:56 crc kubenswrapper[4897]: I0214 18:59:56.602080 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4r6x6" event={"ID":"ae82eac1-c909-47f2-b4b5-2f3f1267345e","Type":"ContainerStarted","Data":"cdd36adc7057fc3433070fc4e73e01adb4d0d49093ff7b7a488e76ebf0957fbf"} Feb 14 18:59:56 crc kubenswrapper[4897]: I0214 18:59:56.627882 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4r6x6" podStartSLOduration=3.627865273 podStartE2EDuration="3.627865273s" podCreationTimestamp="2026-02-14 18:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 
18:59:56.624193999 +0000 UTC m=+1049.600602492" watchObservedRunningTime="2026-02-14 18:59:56.627865273 +0000 UTC m=+1049.604273756" Feb 14 18:59:57 crc kubenswrapper[4897]: I0214 18:59:57.610416 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4r6x6" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.145700 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg"] Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.147761 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.150045 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.153168 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg"] Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.155550 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.318521 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b297e5-cb98-47d7-96bf-9a680217ecfe-config-volume\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.318602 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/91b297e5-cb98-47d7-96bf-9a680217ecfe-secret-volume\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.318639 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nwr\" (UniqueName: \"kubernetes.io/projected/91b297e5-cb98-47d7-96bf-9a680217ecfe-kube-api-access-m6nwr\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.420052 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b297e5-cb98-47d7-96bf-9a680217ecfe-config-volume\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.420140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91b297e5-cb98-47d7-96bf-9a680217ecfe-secret-volume\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.420211 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6nwr\" (UniqueName: \"kubernetes.io/projected/91b297e5-cb98-47d7-96bf-9a680217ecfe-kube-api-access-m6nwr\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc 
kubenswrapper[4897]: I0214 19:00:00.422427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b297e5-cb98-47d7-96bf-9a680217ecfe-config-volume\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.437882 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6nwr\" (UniqueName: \"kubernetes.io/projected/91b297e5-cb98-47d7-96bf-9a680217ecfe-kube-api-access-m6nwr\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.437919 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91b297e5-cb98-47d7-96bf-9a680217ecfe-secret-volume\") pod \"collect-profiles-29518260-wrkrg\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:00 crc kubenswrapper[4897]: I0214 19:00:00.463806 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:02 crc kubenswrapper[4897]: I0214 19:00:02.872602 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg"] Feb 14 19:00:03 crc kubenswrapper[4897]: I0214 19:00:03.672523 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" event={"ID":"91b297e5-cb98-47d7-96bf-9a680217ecfe","Type":"ContainerStarted","Data":"0ab894dc8e727385b753abb8e4553a750d12f7d61a2d4e60d807fff50993237c"} Feb 14 19:00:03 crc kubenswrapper[4897]: I0214 19:00:03.672580 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" event={"ID":"91b297e5-cb98-47d7-96bf-9a680217ecfe","Type":"ContainerStarted","Data":"53faeb8923b25419d9b6a973109eb3d317504584828ad04d07027eb9f8a5f5d5"} Feb 14 19:00:03 crc kubenswrapper[4897]: I0214 19:00:03.700556 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" podStartSLOduration=3.700536524 podStartE2EDuration="3.700536524s" podCreationTimestamp="2026-02-14 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:00:03.693968421 +0000 UTC m=+1056.670376914" watchObservedRunningTime="2026-02-14 19:00:03.700536524 +0000 UTC m=+1056.676944997" Feb 14 19:00:04 crc kubenswrapper[4897]: I0214 19:00:04.011925 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-mdj4b" Feb 14 19:00:04 crc kubenswrapper[4897]: I0214 19:00:04.683974 4897 generic.go:334] "Generic (PLEG): container finished" podID="91b297e5-cb98-47d7-96bf-9a680217ecfe" 
containerID="0ab894dc8e727385b753abb8e4553a750d12f7d61a2d4e60d807fff50993237c" exitCode=0 Feb 14 19:00:04 crc kubenswrapper[4897]: I0214 19:00:04.684076 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" event={"ID":"91b297e5-cb98-47d7-96bf-9a680217ecfe","Type":"ContainerDied","Data":"0ab894dc8e727385b753abb8e4553a750d12f7d61a2d4e60d807fff50993237c"} Feb 14 19:00:05 crc kubenswrapper[4897]: I0214 19:00:05.487054 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4r6x6" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.500548 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ncd25"] Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.503585 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ncd25" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.506276 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.506383 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-x4rfc" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.514384 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.523590 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ncd25"] Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.647534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vcrw\" (UniqueName: \"kubernetes.io/projected/152118b1-55ba-4201-8dce-8d916862a55f-kube-api-access-7vcrw\") pod 
\"openstack-operator-index-ncd25\" (UID: \"152118b1-55ba-4201-8dce-8d916862a55f\") " pod="openstack-operators/openstack-operator-index-ncd25" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.749492 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vcrw\" (UniqueName: \"kubernetes.io/projected/152118b1-55ba-4201-8dce-8d916862a55f-kube-api-access-7vcrw\") pod \"openstack-operator-index-ncd25\" (UID: \"152118b1-55ba-4201-8dce-8d916862a55f\") " pod="openstack-operators/openstack-operator-index-ncd25" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.772759 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vcrw\" (UniqueName: \"kubernetes.io/projected/152118b1-55ba-4201-8dce-8d916862a55f-kube-api-access-7vcrw\") pod \"openstack-operator-index-ncd25\" (UID: \"152118b1-55ba-4201-8dce-8d916862a55f\") " pod="openstack-operators/openstack-operator-index-ncd25" Feb 14 19:00:11 crc kubenswrapper[4897]: I0214 19:00:11.834189 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ncd25" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.298625 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.401625 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6nwr\" (UniqueName: \"kubernetes.io/projected/91b297e5-cb98-47d7-96bf-9a680217ecfe-kube-api-access-m6nwr\") pod \"91b297e5-cb98-47d7-96bf-9a680217ecfe\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.401765 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b297e5-cb98-47d7-96bf-9a680217ecfe-config-volume\") pod \"91b297e5-cb98-47d7-96bf-9a680217ecfe\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.401826 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91b297e5-cb98-47d7-96bf-9a680217ecfe-secret-volume\") pod \"91b297e5-cb98-47d7-96bf-9a680217ecfe\" (UID: \"91b297e5-cb98-47d7-96bf-9a680217ecfe\") " Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.403106 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b297e5-cb98-47d7-96bf-9a680217ecfe-config-volume" (OuterVolumeSpecName: "config-volume") pod "91b297e5-cb98-47d7-96bf-9a680217ecfe" (UID: "91b297e5-cb98-47d7-96bf-9a680217ecfe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.408154 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b297e5-cb98-47d7-96bf-9a680217ecfe-kube-api-access-m6nwr" (OuterVolumeSpecName: "kube-api-access-m6nwr") pod "91b297e5-cb98-47d7-96bf-9a680217ecfe" (UID: "91b297e5-cb98-47d7-96bf-9a680217ecfe"). 
InnerVolumeSpecName "kube-api-access-m6nwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.414233 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b297e5-cb98-47d7-96bf-9a680217ecfe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91b297e5-cb98-47d7-96bf-9a680217ecfe" (UID: "91b297e5-cb98-47d7-96bf-9a680217ecfe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.506012 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6nwr\" (UniqueName: \"kubernetes.io/projected/91b297e5-cb98-47d7-96bf-9a680217ecfe-kube-api-access-m6nwr\") on node \"crc\" DevicePath \"\"" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.506291 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b297e5-cb98-47d7-96bf-9a680217ecfe-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.506301 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91b297e5-cb98-47d7-96bf-9a680217ecfe-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.809374 4897 generic.go:334] "Generic (PLEG): container finished" podID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerID="77e6aed195a60ec748d8a8b35e0177d3d77629fe94842d5479483d10df4d877e" exitCode=0 Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.809424 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerDied","Data":"77e6aed195a60ec748d8a8b35e0177d3d77629fe94842d5479483d10df4d877e"} Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.812479 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" event={"ID":"7ea0a9e9-940c-4856-8fd0-f19994e3b810","Type":"ContainerStarted","Data":"95b254c3fa34e8b764ab01f049e2e0d43a7ac2a23213fd6fa403601c29262a18"} Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.812779 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.814664 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg" event={"ID":"91b297e5-cb98-47d7-96bf-9a680217ecfe","Type":"ContainerDied","Data":"53faeb8923b25419d9b6a973109eb3d317504584828ad04d07027eb9f8a5f5d5"} Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.814699 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53faeb8923b25419d9b6a973109eb3d317504584828ad04d07027eb9f8a5f5d5" Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.814735 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg"
Feb 14 19:00:13 crc kubenswrapper[4897]: I0214 19:00:13.867383 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ncd25"]
Feb 14 19:00:13 crc kubenswrapper[4897]: W0214 19:00:13.886228 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod152118b1_55ba_4201_8dce_8d916862a55f.slice/crio-9cb2c47efc2a282f737d685ba86a9f914db0b20b958c72592d54d2d4ae2a8686 WatchSource:0}: Error finding container 9cb2c47efc2a282f737d685ba86a9f914db0b20b958c72592d54d2d4ae2a8686: Status 404 returned error can't find the container with id 9cb2c47efc2a282f737d685ba86a9f914db0b20b958c72592d54d2d4ae2a8686
Feb 14 19:00:14 crc kubenswrapper[4897]: I0214 19:00:14.345176 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" podStartSLOduration=2.761360353 podStartE2EDuration="21.345156807s" podCreationTimestamp="2026-02-14 18:59:53 +0000 UTC" firstStartedPulling="2026-02-14 18:59:54.767504509 +0000 UTC m=+1047.743912982" lastFinishedPulling="2026-02-14 19:00:13.351300953 +0000 UTC m=+1066.327709436" observedRunningTime="2026-02-14 19:00:13.883418792 +0000 UTC m=+1066.859827295" watchObservedRunningTime="2026-02-14 19:00:14.345156807 +0000 UTC m=+1067.321565310"
Feb 14 19:00:14 crc kubenswrapper[4897]: I0214 19:00:14.826205 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ncd25" event={"ID":"152118b1-55ba-4201-8dce-8d916862a55f","Type":"ContainerStarted","Data":"9cb2c47efc2a282f737d685ba86a9f914db0b20b958c72592d54d2d4ae2a8686"}
Feb 14 19:00:14 crc kubenswrapper[4897]: I0214 19:00:14.832342 4897 generic.go:334] "Generic (PLEG): container finished" podID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerID="a7862b831f65b3c8d642bff7b3f48ae653d09d25aac3ecf1333de227894ecc4a" exitCode=0
Feb 14 19:00:14 crc kubenswrapper[4897]: I0214 19:00:14.832839 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerDied","Data":"a7862b831f65b3c8d642bff7b3f48ae653d09d25aac3ecf1333de227894ecc4a"}
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.277378 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ncd25"]
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.708015 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bdg8n"]
Feb 14 19:00:15 crc kubenswrapper[4897]: E0214 19:00:15.708606 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b297e5-cb98-47d7-96bf-9a680217ecfe" containerName="collect-profiles"
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.708628 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b297e5-cb98-47d7-96bf-9a680217ecfe" containerName="collect-profiles"
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.708965 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b297e5-cb98-47d7-96bf-9a680217ecfe" containerName="collect-profiles"
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.710280 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.730128 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bdg8n"]
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.842302 4897 generic.go:334] "Generic (PLEG): container finished" podID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerID="a1c2a66caa9c087743e1d0b8356c02c5cc6b092d7a72899663f6b6282bdfbcae" exitCode=0
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.842344 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerDied","Data":"a1c2a66caa9c087743e1d0b8356c02c5cc6b092d7a72899663f6b6282bdfbcae"}
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.844325 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r99v\" (UniqueName: \"kubernetes.io/projected/afb2923f-489f-4ce0-bd55-f95a6c59f809-kube-api-access-8r99v\") pod \"openstack-operator-index-bdg8n\" (UID: \"afb2923f-489f-4ce0-bd55-f95a6c59f809\") " pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.945682 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r99v\" (UniqueName: \"kubernetes.io/projected/afb2923f-489f-4ce0-bd55-f95a6c59f809-kube-api-access-8r99v\") pod \"openstack-operator-index-bdg8n\" (UID: \"afb2923f-489f-4ce0-bd55-f95a6c59f809\") " pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:15 crc kubenswrapper[4897]: I0214 19:00:15.980560 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r99v\" (UniqueName: \"kubernetes.io/projected/afb2923f-489f-4ce0-bd55-f95a6c59f809-kube-api-access-8r99v\") pod \"openstack-operator-index-bdg8n\" (UID: \"afb2923f-489f-4ce0-bd55-f95a6c59f809\") " pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:16 crc kubenswrapper[4897]: I0214 19:00:16.048190 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:16 crc kubenswrapper[4897]: I0214 19:00:16.858504 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bdg8n"]
Feb 14 19:00:17 crc kubenswrapper[4897]: I0214 19:00:17.861893 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"2070f0f5e76011779948f4bb2f7804767e5592f243f23c39e6925406dd870e0d"}
Feb 14 19:00:17 crc kubenswrapper[4897]: I0214 19:00:17.862797 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bdg8n" event={"ID":"afb2923f-489f-4ce0-bd55-f95a6c59f809","Type":"ContainerStarted","Data":"ff3df8ff281304f778aea7ac8721584472bccc16ff18206b0e1ab092b61e7f4d"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.871336 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ncd25" event={"ID":"152118b1-55ba-4201-8dce-8d916862a55f","Type":"ContainerStarted","Data":"9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.871404 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-ncd25" podUID="152118b1-55ba-4201-8dce-8d916862a55f" containerName="registry-server" containerID="cri-o://9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20" gracePeriod=2
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.876336 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bdg8n" event={"ID":"afb2923f-489f-4ce0-bd55-f95a6c59f809","Type":"ContainerStarted","Data":"20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.882434 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"419d23265fba07492f4e7bb29806eb120f108467bac494e17a7894fd0c49659b"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.882466 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"55e45edd60f53605a0d9db6efba9c95616e8ee0122a47b40c12eacac6442e68c"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.882478 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"fc2cdb99653f97ed8d8570cea902e81296c6fc50f0895c5cd7c664be66c4fd51"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.882489 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"f5267356897e21abb6ac6f691db815dc5386d4bddbd5b8b5c76c31d53c208242"}
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.894787 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ncd25" podStartSLOduration=3.873589722 podStartE2EDuration="7.894764561s" podCreationTimestamp="2026-02-14 19:00:11 +0000 UTC" firstStartedPulling="2026-02-14 19:00:13.888284843 +0000 UTC m=+1066.864693326" lastFinishedPulling="2026-02-14 19:00:17.909459682 +0000 UTC m=+1070.885868165" observedRunningTime="2026-02-14 19:00:18.885147995 +0000 UTC m=+1071.861556498" watchObservedRunningTime="2026-02-14 19:00:18.894764561 +0000 UTC m=+1071.871173034"
Feb 14 19:00:18 crc kubenswrapper[4897]: I0214 19:00:18.904710 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bdg8n" podStartSLOduration=2.951364687 podStartE2EDuration="3.904690809s" podCreationTimestamp="2026-02-14 19:00:15 +0000 UTC" firstStartedPulling="2026-02-14 19:00:16.959990189 +0000 UTC m=+1069.936398672" lastFinishedPulling="2026-02-14 19:00:17.913316321 +0000 UTC m=+1070.889724794" observedRunningTime="2026-02-14 19:00:18.901103197 +0000 UTC m=+1071.877511680" watchObservedRunningTime="2026-02-14 19:00:18.904690809 +0000 UTC m=+1071.881099292"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.449852 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ncd25"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.524053 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vcrw\" (UniqueName: \"kubernetes.io/projected/152118b1-55ba-4201-8dce-8d916862a55f-kube-api-access-7vcrw\") pod \"152118b1-55ba-4201-8dce-8d916862a55f\" (UID: \"152118b1-55ba-4201-8dce-8d916862a55f\") "
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.536489 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152118b1-55ba-4201-8dce-8d916862a55f-kube-api-access-7vcrw" (OuterVolumeSpecName: "kube-api-access-7vcrw") pod "152118b1-55ba-4201-8dce-8d916862a55f" (UID: "152118b1-55ba-4201-8dce-8d916862a55f"). InnerVolumeSpecName "kube-api-access-7vcrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.626298 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vcrw\" (UniqueName: \"kubernetes.io/projected/152118b1-55ba-4201-8dce-8d916862a55f-kube-api-access-7vcrw\") on node \"crc\" DevicePath \"\""
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.891138 4897 generic.go:334] "Generic (PLEG): container finished" podID="152118b1-55ba-4201-8dce-8d916862a55f" containerID="9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20" exitCode=0
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.891197 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ncd25" event={"ID":"152118b1-55ba-4201-8dce-8d916862a55f","Type":"ContainerDied","Data":"9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20"}
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.891223 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ncd25" event={"ID":"152118b1-55ba-4201-8dce-8d916862a55f","Type":"ContainerDied","Data":"9cb2c47efc2a282f737d685ba86a9f914db0b20b958c72592d54d2d4ae2a8686"}
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.891238 4897 scope.go:117] "RemoveContainer" containerID="9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.891325 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ncd25"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.897744 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"86c98a95f0aa5dc9a799f4e838c5513463c7989b1474a77673fa2fe26ea701fa"}
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.897795 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ks77p"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.912819 4897 scope.go:117] "RemoveContainer" containerID="9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20"
Feb 14 19:00:19 crc kubenswrapper[4897]: E0214 19:00:19.913287 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20\": container with ID starting with 9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20 not found: ID does not exist" containerID="9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.913328 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20"} err="failed to get container status \"9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20\": rpc error: code = NotFound desc = could not find container \"9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20\": container with ID starting with 9cd458dbb8141ba6499661bb2064a9aac2c0f3eb5b9cfdd6efb2ab31aedddf20 not found: ID does not exist"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.925826 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ks77p" podStartSLOduration=8.163517746 podStartE2EDuration="26.925804124s" podCreationTimestamp="2026-02-14 18:59:53 +0000 UTC" firstStartedPulling="2026-02-14 18:59:54.637205754 +0000 UTC m=+1047.613614237" lastFinishedPulling="2026-02-14 19:00:13.399492122 +0000 UTC m=+1066.375900615" observedRunningTime="2026-02-14 19:00:19.922804622 +0000 UTC m=+1072.899213125" watchObservedRunningTime="2026-02-14 19:00:19.925804124 +0000 UTC m=+1072.902212617"
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.942236 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ncd25"]
Feb 14 19:00:19 crc kubenswrapper[4897]: I0214 19:00:19.950074 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ncd25"]
Feb 14 19:00:21 crc kubenswrapper[4897]: I0214 19:00:21.811007 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152118b1-55ba-4201-8dce-8d916862a55f" path="/var/lib/kubelet/pods/152118b1-55ba-4201-8dce-8d916862a55f/volumes"
Feb 14 19:00:24 crc kubenswrapper[4897]: I0214 19:00:24.479291 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt"
Feb 14 19:00:24 crc kubenswrapper[4897]: I0214 19:00:24.483414 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ks77p"
Feb 14 19:00:24 crc kubenswrapper[4897]: I0214 19:00:24.563876 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ks77p"
Feb 14 19:00:26 crc kubenswrapper[4897]: I0214 19:00:26.049090 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:26 crc kubenswrapper[4897]: I0214 19:00:26.049230 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:26 crc kubenswrapper[4897]: I0214 19:00:26.108811 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:27 crc kubenswrapper[4897]: I0214 19:00:27.000713 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 19:00:34 crc kubenswrapper[4897]: I0214 19:00:34.486525 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ks77p"
Feb 14 19:00:40 crc kubenswrapper[4897]: I0214 19:00:40.958953 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"]
Feb 14 19:00:40 crc kubenswrapper[4897]: E0214 19:00:40.962226 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152118b1-55ba-4201-8dce-8d916862a55f" containerName="registry-server"
Feb 14 19:00:40 crc kubenswrapper[4897]: I0214 19:00:40.962263 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="152118b1-55ba-4201-8dce-8d916862a55f" containerName="registry-server"
Feb 14 19:00:40 crc kubenswrapper[4897]: I0214 19:00:40.962559 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="152118b1-55ba-4201-8dce-8d916862a55f" containerName="registry-server"
Feb 14 19:00:40 crc kubenswrapper[4897]: I0214 19:00:40.964886 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:40 crc kubenswrapper[4897]: I0214 19:00:40.968653 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"]
Feb 14 19:00:40 crc kubenswrapper[4897]: I0214 19:00:40.969446 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4dsff"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.065264 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2bc\" (UniqueName: \"kubernetes.io/projected/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-kube-api-access-5q2bc\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.065352 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-util\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.065421 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-bundle\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.167679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2bc\" (UniqueName: \"kubernetes.io/projected/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-kube-api-access-5q2bc\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.167778 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-util\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.167840 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-bundle\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.169081 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-util\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.169105 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-bundle\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.204840 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2bc\" (UniqueName: \"kubernetes.io/projected/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-kube-api-access-5q2bc\") pod \"5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") " pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.327727 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:41 crc kubenswrapper[4897]: I0214 19:00:41.818984 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"]
Feb 14 19:00:42 crc kubenswrapper[4897]: I0214 19:00:42.154913 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t" event={"ID":"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7","Type":"ContainerStarted","Data":"d582f8cea573c574b472cb4f4b3f6d6bbd213625b5a1724cbe0ee159ccdef3cc"}
Feb 14 19:00:42 crc kubenswrapper[4897]: I0214 19:00:42.154979 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t" event={"ID":"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7","Type":"ContainerStarted","Data":"9c70ce8c6e938a7ba0d1b546b617ccb1a9446b8877fa0868e518107c47c35847"}
Feb 14 19:00:42 crc kubenswrapper[4897]: E0214 19:00:42.352638 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb7d19c8_0cd4_48cb_bea2_1178ad5801c7.slice/crio-conmon-d582f8cea573c574b472cb4f4b3f6d6bbd213625b5a1724cbe0ee159ccdef3cc.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 19:00:43 crc kubenswrapper[4897]: I0214 19:00:43.165757 4897 generic.go:334] "Generic (PLEG): container finished" podID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerID="d582f8cea573c574b472cb4f4b3f6d6bbd213625b5a1724cbe0ee159ccdef3cc" exitCode=0
Feb 14 19:00:43 crc kubenswrapper[4897]: I0214 19:00:43.165848 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t" event={"ID":"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7","Type":"ContainerDied","Data":"d582f8cea573c574b472cb4f4b3f6d6bbd213625b5a1724cbe0ee159ccdef3cc"}
Feb 14 19:00:44 crc kubenswrapper[4897]: I0214 19:00:44.180959 4897 generic.go:334] "Generic (PLEG): container finished" podID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerID="47f5424ca9dbfb9c3a25b413f8fd624dbd80ac085bcf44ac856a30baed99bc32" exitCode=0
Feb 14 19:00:44 crc kubenswrapper[4897]: I0214 19:00:44.181200 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t" event={"ID":"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7","Type":"ContainerDied","Data":"47f5424ca9dbfb9c3a25b413f8fd624dbd80ac085bcf44ac856a30baed99bc32"}
Feb 14 19:00:45 crc kubenswrapper[4897]: I0214 19:00:45.192126 4897 generic.go:334] "Generic (PLEG): container finished" podID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerID="2acc7756044efc61be354104cc35f5f76199928166c364a715f1f495c8c96cfb" exitCode=0
Feb 14 19:00:45 crc kubenswrapper[4897]: I0214 19:00:45.192173 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t" event={"ID":"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7","Type":"ContainerDied","Data":"2acc7756044efc61be354104cc35f5f76199928166c364a715f1f495c8c96cfb"}
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.584890 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.677523 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q2bc\" (UniqueName: \"kubernetes.io/projected/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-kube-api-access-5q2bc\") pod \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") "
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.677668 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-util\") pod \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") "
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.677749 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-bundle\") pod \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\" (UID: \"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7\") "
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.678412 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-bundle" (OuterVolumeSpecName: "bundle") pod "cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" (UID: "cb7d19c8-0cd4-48cb-bea2-1178ad5801c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.682320 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-kube-api-access-5q2bc" (OuterVolumeSpecName: "kube-api-access-5q2bc") pod "cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" (UID: "cb7d19c8-0cd4-48cb-bea2-1178ad5801c7"). InnerVolumeSpecName "kube-api-access-5q2bc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.703153 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-util" (OuterVolumeSpecName: "util") pod "cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" (UID: "cb7d19c8-0cd4-48cb-bea2-1178ad5801c7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.781473 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q2bc\" (UniqueName: \"kubernetes.io/projected/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-kube-api-access-5q2bc\") on node \"crc\" DevicePath \"\""
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.781543 4897 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-util\") on node \"crc\" DevicePath \"\""
Feb 14 19:00:46 crc kubenswrapper[4897]: I0214 19:00:46.781558 4897 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb7d19c8-0cd4-48cb-bea2-1178ad5801c7-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:00:47 crc kubenswrapper[4897]: I0214 19:00:47.214985 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t" event={"ID":"cb7d19c8-0cd4-48cb-bea2-1178ad5801c7","Type":"ContainerDied","Data":"9c70ce8c6e938a7ba0d1b546b617ccb1a9446b8877fa0868e518107c47c35847"}
Feb 14 19:00:47 crc kubenswrapper[4897]: I0214 19:00:47.215081 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c70ce8c6e938a7ba0d1b546b617ccb1a9446b8877fa0868e518107c47c35847"
Feb 14 19:00:47 crc kubenswrapper[4897]: I0214 19:00:47.215094 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.858726 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"]
Feb 14 19:00:59 crc kubenswrapper[4897]: E0214 19:00:59.859709 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="pull"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.859727 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="pull"
Feb 14 19:00:59 crc kubenswrapper[4897]: E0214 19:00:59.859772 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="extract"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.859780 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="extract"
Feb 14 19:00:59 crc kubenswrapper[4897]: E0214 19:00:59.859798 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="util"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.859805 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="util"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.865232 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7d19c8-0cd4-48cb-bea2-1178ad5801c7" containerName="extract"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.865965 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.869576 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-xxvh4"
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.881898 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"]
Feb 14 19:00:59 crc kubenswrapper[4897]: I0214 19:00:59.919001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7tp\" (UniqueName: \"kubernetes.io/projected/55ee13ff-72a6-4bdb-8461-fb545f66b881-kube-api-access-zw7tp\") pod \"openstack-operator-controller-init-99cb98555-5nrbh\" (UID: \"55ee13ff-72a6-4bdb-8461-fb545f66b881\") " pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:01:00 crc kubenswrapper[4897]: I0214 19:01:00.020370 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw7tp\" (UniqueName: \"kubernetes.io/projected/55ee13ff-72a6-4bdb-8461-fb545f66b881-kube-api-access-zw7tp\") pod \"openstack-operator-controller-init-99cb98555-5nrbh\" (UID: \"55ee13ff-72a6-4bdb-8461-fb545f66b881\") " pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:01:00 crc kubenswrapper[4897]: I0214 19:01:00.042897 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw7tp\" (UniqueName: \"kubernetes.io/projected/55ee13ff-72a6-4bdb-8461-fb545f66b881-kube-api-access-zw7tp\") pod \"openstack-operator-controller-init-99cb98555-5nrbh\" (UID: \"55ee13ff-72a6-4bdb-8461-fb545f66b881\") " pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:01:00 crc kubenswrapper[4897]: I0214 19:01:00.187385 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:01:00 crc kubenswrapper[4897]: I0214 19:01:00.651836 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"]
Feb 14 19:01:00 crc kubenswrapper[4897]: W0214 19:01:00.656078 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55ee13ff_72a6_4bdb_8461_fb545f66b881.slice/crio-d1a151a94f4dcd646d976ce249f92b2bcb614b464d2b2fe3984d42a27ae599d7 WatchSource:0}: Error finding container d1a151a94f4dcd646d976ce249f92b2bcb614b464d2b2fe3984d42a27ae599d7: Status 404 returned error can't find the container with id d1a151a94f4dcd646d976ce249f92b2bcb614b464d2b2fe3984d42a27ae599d7
Feb 14 19:01:01 crc kubenswrapper[4897]: I0214 19:01:01.355381 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" event={"ID":"55ee13ff-72a6-4bdb-8461-fb545f66b881","Type":"ContainerStarted","Data":"d1a151a94f4dcd646d976ce249f92b2bcb614b464d2b2fe3984d42a27ae599d7"}
Feb 14 19:01:05 crc kubenswrapper[4897]: I0214 19:01:05.399395 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" event={"ID":"55ee13ff-72a6-4bdb-8461-fb545f66b881","Type":"ContainerStarted","Data":"c39133b4eb15992136e3ca674c0b0aa667bb1d0fbe00872d8fb40a1a05aa8097"}
Feb 14 19:01:05 crc kubenswrapper[4897]: I0214 19:01:05.399872 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:01:05 crc kubenswrapper[4897]: I0214 19:01:05.456395 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" podStartSLOduration=2.362424058 podStartE2EDuration="6.456366086s" podCreationTimestamp="2026-02-14 19:00:59 +0000 UTC" firstStartedPulling="2026-02-14 19:01:00.658257173 +0000 UTC m=+1113.634665666" lastFinishedPulling="2026-02-14 19:01:04.752199221 +0000 UTC m=+1117.728607694" observedRunningTime="2026-02-14 19:01:05.441346501 +0000 UTC m=+1118.417755024" watchObservedRunningTime="2026-02-14 19:01:05.456366086 +0000 UTC m=+1118.432774609"
Feb 14 19:01:10 crc kubenswrapper[4897]: I0214 19:01:10.190936 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.917442 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5"]
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.920653 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5"
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.927305 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2x5f8"
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.962409 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t"]
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.963687 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t"
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.970118 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5"]
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.975524 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pktvd"
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.985658 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d"]
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.986708 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d"
Feb 14 19:01:30 crc kubenswrapper[4897]: I0214 19:01:30.993558 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lhlpg"
Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.009732 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t"]
Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.022859 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d"]
Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.032292 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq"]
Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.033436 4897 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.036406 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dsq88" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.047788 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-wsghb"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.048955 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.053329 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-659lk" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.078631 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc77r\" (UniqueName: \"kubernetes.io/projected/0128668e-be83-412e-96e6-8c158ab45cc5-kube-api-access-dc77r\") pod \"glance-operator-controller-manager-77987464f4-wsghb\" (UID: \"0128668e-be83-412e-96e6-8c158ab45cc5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.078688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/10c98e4f-ae22-481b-992d-6804a1b5d0cc-kube-api-access-zzzpf\") pod \"heat-operator-controller-manager-69f49c598c-5v2tq\" (UID: \"10c98e4f-ae22-481b-992d-6804a1b5d0cc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.078715 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbj5\" (UniqueName: \"kubernetes.io/projected/fe513351-3f7b-436d-9218-a66a6f579948-kube-api-access-gsbj5\") pod \"designate-operator-controller-manager-6d8bf5c495-drm7d\" (UID: \"fe513351-3f7b-436d-9218-a66a6f579948\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.078835 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgv8z\" (UniqueName: \"kubernetes.io/projected/8dffc7df-2563-4f02-8dfc-83ab824af909-kube-api-access-sgv8z\") pod \"cinder-operator-controller-manager-5d946d989d-ts22t\" (UID: \"8dffc7df-2563-4f02-8dfc-83ab824af909\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.078864 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-242tn\" (UniqueName: \"kubernetes.io/projected/48e0b91f-f946-4ecc-b36c-fc280e728f77-kube-api-access-242tn\") pod \"barbican-operator-controller-manager-868647ff47-2lvr5\" (UID: \"48e0b91f-f946-4ecc-b36c-fc280e728f77\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.078947 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-wsghb"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.097253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.112365 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.113373 
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.117673 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-95885" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.139010 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.140144 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.143219 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.143265 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-chrk2" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.147811 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.148837 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.151249 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-txkn4" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.159466 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.173906 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.179725 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc77r\" (UniqueName: \"kubernetes.io/projected/0128668e-be83-412e-96e6-8c158ab45cc5-kube-api-access-dc77r\") pod \"glance-operator-controller-manager-77987464f4-wsghb\" (UID: \"0128668e-be83-412e-96e6-8c158ab45cc5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.179767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/10c98e4f-ae22-481b-992d-6804a1b5d0cc-kube-api-access-zzzpf\") pod \"heat-operator-controller-manager-69f49c598c-5v2tq\" (UID: \"10c98e4f-ae22-481b-992d-6804a1b5d0cc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.179798 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsbj5\" (UniqueName: \"kubernetes.io/projected/fe513351-3f7b-436d-9218-a66a6f579948-kube-api-access-gsbj5\") pod \"designate-operator-controller-manager-6d8bf5c495-drm7d\" (UID: \"fe513351-3f7b-436d-9218-a66a6f579948\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.179893 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgv8z\" (UniqueName: \"kubernetes.io/projected/8dffc7df-2563-4f02-8dfc-83ab824af909-kube-api-access-sgv8z\") pod \"cinder-operator-controller-manager-5d946d989d-ts22t\" (UID: \"8dffc7df-2563-4f02-8dfc-83ab824af909\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.179923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-242tn\" (UniqueName: \"kubernetes.io/projected/48e0b91f-f946-4ecc-b36c-fc280e728f77-kube-api-access-242tn\") pod \"barbican-operator-controller-manager-868647ff47-2lvr5\" (UID: \"48e0b91f-f946-4ecc-b36c-fc280e728f77\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.195231 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.196350 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.197152 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.198795 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cttlr" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.208266 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsbj5\" (UniqueName: \"kubernetes.io/projected/fe513351-3f7b-436d-9218-a66a6f579948-kube-api-access-gsbj5\") pod \"designate-operator-controller-manager-6d8bf5c495-drm7d\" (UID: \"fe513351-3f7b-436d-9218-a66a6f579948\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.212230 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc77r\" (UniqueName: \"kubernetes.io/projected/0128668e-be83-412e-96e6-8c158ab45cc5-kube-api-access-dc77r\") pod \"glance-operator-controller-manager-77987464f4-wsghb\" (UID: \"0128668e-be83-412e-96e6-8c158ab45cc5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.226689 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.227572 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgv8z\" (UniqueName: \"kubernetes.io/projected/8dffc7df-2563-4f02-8dfc-83ab824af909-kube-api-access-sgv8z\") pod \"cinder-operator-controller-manager-5d946d989d-ts22t\" (UID: \"8dffc7df-2563-4f02-8dfc-83ab824af909\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.227829 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-242tn\" (UniqueName: \"kubernetes.io/projected/48e0b91f-f946-4ecc-b36c-fc280e728f77-kube-api-access-242tn\") pod \"barbican-operator-controller-manager-868647ff47-2lvr5\" (UID: \"48e0b91f-f946-4ecc-b36c-fc280e728f77\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.230633 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/10c98e4f-ae22-481b-992d-6804a1b5d0cc-kube-api-access-zzzpf\") pod \"heat-operator-controller-manager-69f49c598c-5v2tq\" (UID: \"10c98e4f-ae22-481b-992d-6804a1b5d0cc\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.238191 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.239334 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.240097 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.240572 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.242494 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dx685" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.242494 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jrwtq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.247771 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.254855 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.268927 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.270767 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.271996 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-snxkw" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.281684 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.281744 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnbs\" (UniqueName: \"kubernetes.io/projected/de1e8e22-10a4-4d2a-855f-4c7bb6a49096-kube-api-access-bhnbs\") pod \"horizon-operator-controller-manager-5b9b8895d5-tsqnc\" (UID: \"de1e8e22-10a4-4d2a-855f-4c7bb6a49096\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.281766 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndqf\" (UniqueName: \"kubernetes.io/projected/a2a15c49-cac6-4772-be07-69fd7597b692-kube-api-access-cndqf\") pod \"ironic-operator-controller-manager-554564d7fc-fzgws\" (UID: \"a2a15c49-cac6-4772-be07-69fd7597b692\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.281817 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99gv\" (UniqueName: \"kubernetes.io/projected/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-kube-api-access-s99gv\") pod 
\"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.284993 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.286205 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.287551 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-w8rrq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.298589 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.305160 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.311947 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.318087 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.319217 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.321218 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tzxss" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.329529 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.336459 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.336790 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.341934 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.344078 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.346453 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-plkr9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.348539 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.349663 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.351068 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ks8ff" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.351288 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.357583 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.375907 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.388457 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88gj\" (UniqueName: \"kubernetes.io/projected/6fe73ade-8031-493c-9628-018ad436c7a5-kube-api-access-q88gj\") pod \"manila-operator-controller-manager-54f6768c69-5dg28\" (UID: \"6fe73ade-8031-493c-9628-018ad436c7a5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.391658 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.391795 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bhnbs\" (UniqueName: \"kubernetes.io/projected/de1e8e22-10a4-4d2a-855f-4c7bb6a49096-kube-api-access-bhnbs\") pod \"horizon-operator-controller-manager-5b9b8895d5-tsqnc\" (UID: \"de1e8e22-10a4-4d2a-855f-4c7bb6a49096\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.391833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndqf\" (UniqueName: \"kubernetes.io/projected/a2a15c49-cac6-4772-be07-69fd7597b692-kube-api-access-cndqf\") pod \"ironic-operator-controller-manager-554564d7fc-fzgws\" (UID: \"a2a15c49-cac6-4772-be07-69fd7597b692\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.391903 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292sv\" (UniqueName: \"kubernetes.io/projected/8238fbef-1e59-4430-af92-1be3d70c4d84-kube-api-access-292sv\") pod \"mariadb-operator-controller-manager-6994f66f48-rtvvf\" (UID: \"8238fbef-1e59-4430-af92-1be3d70c4d84\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.392008 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swbq\" (UniqueName: \"kubernetes.io/projected/7c6ab7c6-c333-41db-ba23-f89b3eff3eef-kube-api-access-4swbq\") pod \"neutron-operator-controller-manager-64ddbf8bb-bl5g8\" (UID: \"7c6ab7c6-c333-41db-ba23-f89b3eff3eef\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.392048 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99gv\" (UniqueName: 
\"kubernetes.io/projected/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-kube-api-access-s99gv\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.392085 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tsjl\" (UniqueName: \"kubernetes.io/projected/5e11063d-aac7-4fea-91d9-0b560622ccb9-kube-api-access-8tsjl\") pod \"keystone-operator-controller-manager-b4d948c87-nwjnd\" (UID: \"5e11063d-aac7-4fea-91d9-0b560622ccb9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" Feb 14 19:01:31 crc kubenswrapper[4897]: E0214 19:01:31.392324 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 19:01:31 crc kubenswrapper[4897]: E0214 19:01:31.392395 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert podName:bd9aef55-ad36-4675-a79a-a1829c9b3b3e nodeName:}" failed. No retries permitted until 2026-02-14 19:01:31.892367846 +0000 UTC m=+1144.868776339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert") pod "infra-operator-controller-manager-79d975b745-9ht86" (UID: "bd9aef55-ad36-4675-a79a-a1829c9b3b3e") : secret "infra-operator-webhook-server-cert" not found Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.405797 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.418185 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.426078 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnbs\" (UniqueName: \"kubernetes.io/projected/de1e8e22-10a4-4d2a-855f-4c7bb6a49096-kube-api-access-bhnbs\") pod \"horizon-operator-controller-manager-5b9b8895d5-tsqnc\" (UID: \"de1e8e22-10a4-4d2a-855f-4c7bb6a49096\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.427133 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndqf\" (UniqueName: \"kubernetes.io/projected/a2a15c49-cac6-4772-be07-69fd7597b692-kube-api-access-cndqf\") pod \"ironic-operator-controller-manager-554564d7fc-fzgws\" (UID: \"a2a15c49-cac6-4772-be07-69fd7597b692\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.446089 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99gv\" (UniqueName: \"kubernetes.io/projected/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-kube-api-access-s99gv\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.447930 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.475659 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.476865 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.482188 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wnk86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.486340 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.493325 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wpkg\" (UniqueName: \"kubernetes.io/projected/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-kube-api-access-5wpkg\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.493460 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292sv\" (UniqueName: \"kubernetes.io/projected/8238fbef-1e59-4430-af92-1be3d70c4d84-kube-api-access-292sv\") pod \"mariadb-operator-controller-manager-6994f66f48-rtvvf\" (UID: \"8238fbef-1e59-4430-af92-1be3d70c4d84\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.495264 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86djg\" (UniqueName: \"kubernetes.io/projected/d2543021-51cc-4cbe-9293-a6e02894e1f4-kube-api-access-86djg\") pod \"ovn-operator-controller-manager-d44cf6b75-bh95f\" (UID: \"d2543021-51cc-4cbe-9293-a6e02894e1f4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.495321 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4h75\" (UniqueName: \"kubernetes.io/projected/088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820-kube-api-access-f4h75\") pod \"nova-operator-controller-manager-567668f5cf-gvcdc\" (UID: \"088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.495368 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b79z\" (UniqueName: \"kubernetes.io/projected/fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a-kube-api-access-9b79z\") pod \"octavia-operator-controller-manager-69f8888797-qbz5t\" (UID: \"fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.495398 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.495424 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swbq\" 
(UniqueName: \"kubernetes.io/projected/7c6ab7c6-c333-41db-ba23-f89b3eff3eef-kube-api-access-4swbq\") pod \"neutron-operator-controller-manager-64ddbf8bb-bl5g8\" (UID: \"7c6ab7c6-c333-41db-ba23-f89b3eff3eef\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.495462 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tsjl\" (UniqueName: \"kubernetes.io/projected/5e11063d-aac7-4fea-91d9-0b560622ccb9-kube-api-access-8tsjl\") pod \"keystone-operator-controller-manager-b4d948c87-nwjnd\" (UID: \"5e11063d-aac7-4fea-91d9-0b560622ccb9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.496848 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q88gj\" (UniqueName: \"kubernetes.io/projected/6fe73ade-8031-493c-9628-018ad436c7a5-kube-api-access-q88gj\") pod \"manila-operator-controller-manager-54f6768c69-5dg28\" (UID: \"6fe73ade-8031-493c-9628-018ad436c7a5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.519483 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292sv\" (UniqueName: \"kubernetes.io/projected/8238fbef-1e59-4430-af92-1be3d70c4d84-kube-api-access-292sv\") pod \"mariadb-operator-controller-manager-6994f66f48-rtvvf\" (UID: \"8238fbef-1e59-4430-af92-1be3d70c4d84\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.526001 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tsjl\" (UniqueName: \"kubernetes.io/projected/5e11063d-aac7-4fea-91d9-0b560622ccb9-kube-api-access-8tsjl\") pod \"keystone-operator-controller-manager-b4d948c87-nwjnd\" (UID: 
\"5e11063d-aac7-4fea-91d9-0b560622ccb9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.528786 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q88gj\" (UniqueName: \"kubernetes.io/projected/6fe73ade-8031-493c-9628-018ad436c7a5-kube-api-access-q88gj\") pod \"manila-operator-controller-manager-54f6768c69-5dg28\" (UID: \"6fe73ade-8031-493c-9628-018ad436c7a5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.532081 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swbq\" (UniqueName: \"kubernetes.io/projected/7c6ab7c6-c333-41db-ba23-f89b3eff3eef-kube-api-access-4swbq\") pod \"neutron-operator-controller-manager-64ddbf8bb-bl5g8\" (UID: \"7c6ab7c6-c333-41db-ba23-f89b3eff3eef\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.561593 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.580976 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.597375 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.598670 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.601701 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5wgwv" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.603200 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86djg\" (UniqueName: \"kubernetes.io/projected/d2543021-51cc-4cbe-9293-a6e02894e1f4-kube-api-access-86djg\") pod \"ovn-operator-controller-manager-d44cf6b75-bh95f\" (UID: \"d2543021-51cc-4cbe-9293-a6e02894e1f4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.603248 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4h75\" (UniqueName: \"kubernetes.io/projected/088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820-kube-api-access-f4h75\") pod \"nova-operator-controller-manager-567668f5cf-gvcdc\" (UID: \"088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.603279 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b79z\" (UniqueName: \"kubernetes.io/projected/fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a-kube-api-access-9b79z\") pod \"octavia-operator-controller-manager-69f8888797-qbz5t\" (UID: \"fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.603300 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: 
\"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.603348 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m42zt\" (UniqueName: \"kubernetes.io/projected/cd0646ca-c695-4387-ba4b-cc9a3d85b460-kube-api-access-m42zt\") pod \"placement-operator-controller-manager-8497b45c89-gfrd9\" (UID: \"cd0646ca-c695-4387-ba4b-cc9a3d85b460\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.603410 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wpkg\" (UniqueName: \"kubernetes.io/projected/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-kube-api-access-5wpkg\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:31 crc kubenswrapper[4897]: E0214 19:01:31.604112 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 19:01:31 crc kubenswrapper[4897]: E0214 19:01:31.604156 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert podName:afb3d9d3-a3e1-4aac-89ef-a7128579e6e9 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:32.104143919 +0000 UTC m=+1145.080552392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" (UID: "afb3d9d3-a3e1-4aac-89ef-a7128579e6e9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.609223 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.626667 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wpkg\" (UniqueName: \"kubernetes.io/projected/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-kube-api-access-5wpkg\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.636198 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4h75\" (UniqueName: \"kubernetes.io/projected/088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820-kube-api-access-f4h75\") pod \"nova-operator-controller-manager-567668f5cf-gvcdc\" (UID: \"088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.636636 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b79z\" (UniqueName: \"kubernetes.io/projected/fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a-kube-api-access-9b79z\") pod \"octavia-operator-controller-manager-69f8888797-qbz5t\" (UID: \"fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.647626 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-86djg\" (UniqueName: \"kubernetes.io/projected/d2543021-51cc-4cbe-9293-a6e02894e1f4-kube-api-access-86djg\") pod \"ovn-operator-controller-manager-d44cf6b75-bh95f\" (UID: \"d2543021-51cc-4cbe-9293-a6e02894e1f4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.685318 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.705628 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.706012 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m42zt\" (UniqueName: \"kubernetes.io/projected/cd0646ca-c695-4387-ba4b-cc9a3d85b460-kube-api-access-m42zt\") pod \"placement-operator-controller-manager-8497b45c89-gfrd9\" (UID: \"cd0646ca-c695-4387-ba4b-cc9a3d85b460\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.706149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfgf\" (UniqueName: \"kubernetes.io/projected/0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6-kube-api-access-psfgf\") pod \"swift-operator-controller-manager-68f46476f-m5nfk\" (UID: \"0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.721516 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.727635 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.729245 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.729360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m42zt\" (UniqueName: \"kubernetes.io/projected/cd0646ca-c695-4387-ba4b-cc9a3d85b460-kube-api-access-m42zt\") pod \"placement-operator-controller-manager-8497b45c89-gfrd9\" (UID: \"cd0646ca-c695-4387-ba4b-cc9a3d85b460\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.732421 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wk8q9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.749393 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.750398 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.765658 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.777866 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7fnnb"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.779004 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.780942 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7njsl" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.788761 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7fnnb"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.801011 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.811976 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psfgf\" (UniqueName: \"kubernetes.io/projected/0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6-kube-api-access-psfgf\") pod \"swift-operator-controller-manager-68f46476f-m5nfk\" (UID: \"0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.812249 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94q6f\" (UniqueName: \"kubernetes.io/projected/f8e83507-87e8-44e6-a08d-f1f45f8b4ee0-kube-api-access-94q6f\") pod \"test-operator-controller-manager-7866795846-7fnnb\" (UID: \"f8e83507-87e8-44e6-a08d-f1f45f8b4ee0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.812418 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7ss\" (UniqueName: \"kubernetes.io/projected/949ed147-ec0c-4e17-bc34-4d27018a9567-kube-api-access-xp7ss\") pod \"telemetry-operator-controller-manager-58f847fcbd-9djqq\" (UID: \"949ed147-ec0c-4e17-bc34-4d27018a9567\") " pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.830450 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psfgf\" (UniqueName: \"kubernetes.io/projected/0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6-kube-api-access-psfgf\") pod \"swift-operator-controller-manager-68f46476f-m5nfk\" (UID: \"0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:01:31 crc kubenswrapper[4897]: 
I0214 19:01:31.839954 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.847163 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.848086 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.848110 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.849076 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.849267 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.849383 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.851665 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-xn7x8" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.851963 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.852022 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wm4rc" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.852103 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.882951 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.884758 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.888090 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6j986" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.896348 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h"] Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.918408 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.918443 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7ss\" (UniqueName: \"kubernetes.io/projected/949ed147-ec0c-4e17-bc34-4d27018a9567-kube-api-access-xp7ss\") pod \"telemetry-operator-controller-manager-58f847fcbd-9djqq\" (UID: \"949ed147-ec0c-4e17-bc34-4d27018a9567\") " pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.918597 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94q6f\" (UniqueName: \"kubernetes.io/projected/f8e83507-87e8-44e6-a08d-f1f45f8b4ee0-kube-api-access-94q6f\") pod \"test-operator-controller-manager-7866795846-7fnnb\" (UID: \"f8e83507-87e8-44e6-a08d-f1f45f8b4ee0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" Feb 14 19:01:31 crc kubenswrapper[4897]: E0214 19:01:31.919791 4897 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 19:01:31 crc kubenswrapper[4897]: E0214 19:01:31.919828 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert podName:bd9aef55-ad36-4675-a79a-a1829c9b3b3e nodeName:}" failed. No retries permitted until 2026-02-14 19:01:32.919815781 +0000 UTC m=+1145.896224264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert") pod "infra-operator-controller-manager-79d975b745-9ht86" (UID: "bd9aef55-ad36-4675-a79a-a1829c9b3b3e") : secret "infra-operator-webhook-server-cert" not found Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.924335 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.942518 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94q6f\" (UniqueName: \"kubernetes.io/projected/f8e83507-87e8-44e6-a08d-f1f45f8b4ee0-kube-api-access-94q6f\") pod \"test-operator-controller-manager-7866795846-7fnnb\" (UID: \"f8e83507-87e8-44e6-a08d-f1f45f8b4ee0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.942539 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7ss\" (UniqueName: \"kubernetes.io/projected/949ed147-ec0c-4e17-bc34-4d27018a9567-kube-api-access-xp7ss\") pod \"telemetry-operator-controller-manager-58f847fcbd-9djqq\" (UID: \"949ed147-ec0c-4e17-bc34-4d27018a9567\") " pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" Feb 14 19:01:31 crc kubenswrapper[4897]: I0214 19:01:31.943052 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5"] Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.027426 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.027487 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.027542 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbhnj\" (UniqueName: \"kubernetes.io/projected/fc708ffc-dcb4-4ac0-9982-4cf347cd505d-kube-api-access-sbhnj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wdv5h\" (UID: \"fc708ffc-dcb4-4ac0-9982-4cf347cd505d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.027620 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nv4n\" (UniqueName: \"kubernetes.io/projected/4243feec-23ed-4292-9291-7ad01f7d12a6-kube-api-access-4nv4n\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:32 crc 
kubenswrapper[4897]: I0214 19:01:32.027653 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjll\" (UniqueName: \"kubernetes.io/projected/26f58f32-c15c-49c7-8756-fc2bae972a2d-kube-api-access-sdjll\") pod \"watcher-operator-controller-manager-5db88f68c-vv2k7\" (UID: \"26f58f32-c15c-49c7-8756-fc2bae972a2d\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.033120 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" event={"ID":"48e0b91f-f946-4ecc-b36c-fc280e728f77","Type":"ContainerStarted","Data":"620800ac6b31856cda669a60e402a1ce217af890781d1e6ce54b9ce32ec4eee9"} Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.128783 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nv4n\" (UniqueName: \"kubernetes.io/projected/4243feec-23ed-4292-9291-7ad01f7d12a6-kube-api-access-4nv4n\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.128843 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjll\" (UniqueName: \"kubernetes.io/projected/26f58f32-c15c-49c7-8756-fc2bae972a2d-kube-api-access-sdjll\") pod \"watcher-operator-controller-manager-5db88f68c-vv2k7\" (UID: \"26f58f32-c15c-49c7-8756-fc2bae972a2d\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.128873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.128927 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.128961 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.129002 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbhnj\" (UniqueName: \"kubernetes.io/projected/fc708ffc-dcb4-4ac0-9982-4cf347cd505d-kube-api-access-sbhnj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wdv5h\" (UID: \"fc708ffc-dcb4-4ac0-9982-4cf347cd505d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h"
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.129105 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.129169 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert podName:afb3d9d3-a3e1-4aac-89ef-a7128579e6e9 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:33.129151108 +0000 UTC m=+1146.105559591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" (UID: "afb3d9d3-a3e1-4aac-89ef-a7128579e6e9") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.129416 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.129497 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:32.629454518 +0000 UTC m=+1145.605863001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.129538 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.129560 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:32.629553031 +0000 UTC m=+1145.605961514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "metrics-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.149015 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.150504 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbhnj\" (UniqueName: \"kubernetes.io/projected/fc708ffc-dcb4-4ac0-9982-4cf347cd505d-kube-api-access-sbhnj\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wdv5h\" (UID: \"fc708ffc-dcb4-4ac0-9982-4cf347cd505d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.150708 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nv4n\" (UniqueName: \"kubernetes.io/projected/4243feec-23ed-4292-9291-7ad01f7d12a6-kube-api-access-4nv4n\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.152202 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjll\" (UniqueName: \"kubernetes.io/projected/26f58f32-c15c-49c7-8756-fc2bae972a2d-kube-api-access-sdjll\") pod \"watcher-operator-controller-manager-5db88f68c-vv2k7\" (UID: \"26f58f32-c15c-49c7-8756-fc2bae972a2d\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.193602 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t"]
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.201844 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.203421 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d"]
Feb 14 19:01:32 crc kubenswrapper[4897]: W0214 19:01:32.219107 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe513351_3f7b_436d_9218_a66a6f579948.slice/crio-68b40b9f27ca00abacce4698a0b12e5e5776a4512ed9107d6ef700c402d269ec WatchSource:0}: Error finding container 68b40b9f27ca00abacce4698a0b12e5e5776a4512ed9107d6ef700c402d269ec: Status 404 returned error can't find the container with id 68b40b9f27ca00abacce4698a0b12e5e5776a4512ed9107d6ef700c402d269ec
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.222524 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.255221 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.573807 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq"]
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.581060 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-wsghb"]
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.589510 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws"]
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.638949 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.639077 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.639109 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.639178 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:33.639159454 +0000 UTC m=+1146.615567937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.639297 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.639382 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:33.639356931 +0000 UTC m=+1146.615765444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "metrics-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.908111 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28"]
Feb 14 19:01:32 crc kubenswrapper[4897]: W0214 19:01:32.930623 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe73ade_8031_493c_9628_018ad436c7a5.slice/crio-28bf09b85b2c8fe085312acc7e9ccba4c73506e66eb56f334e96fe7b8a488e82 WatchSource:0}: Error finding container 28bf09b85b2c8fe085312acc7e9ccba4c73506e66eb56f334e96fe7b8a488e82: Status 404 returned error can't find the container with id 28bf09b85b2c8fe085312acc7e9ccba4c73506e66eb56f334e96fe7b8a488e82
Feb 14 19:01:32 crc kubenswrapper[4897]: I0214 19:01:32.942610 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.942762 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 14 19:01:32 crc kubenswrapper[4897]: E0214 19:01:32.942820 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert podName:bd9aef55-ad36-4675-a79a-a1829c9b3b3e nodeName:}" failed. No retries permitted until 2026-02-14 19:01:34.942805125 +0000 UTC m=+1147.919213608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert") pod "infra-operator-controller-manager-79d975b745-9ht86" (UID: "bd9aef55-ad36-4675-a79a-a1829c9b3b3e") : secret "infra-operator-webhook-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.051639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" event={"ID":"fe513351-3f7b-436d-9218-a66a6f579948","Type":"ContainerStarted","Data":"68b40b9f27ca00abacce4698a0b12e5e5776a4512ed9107d6ef700c402d269ec"}
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.057289 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" event={"ID":"6fe73ade-8031-493c-9628-018ad436c7a5","Type":"ContainerStarted","Data":"28bf09b85b2c8fe085312acc7e9ccba4c73506e66eb56f334e96fe7b8a488e82"}
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.061884 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" event={"ID":"10c98e4f-ae22-481b-992d-6804a1b5d0cc","Type":"ContainerStarted","Data":"609be86502ba409bfb43623a731da1b2f7ac52f11a629538831ad3ec12dcb760"}
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.071231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" event={"ID":"8dffc7df-2563-4f02-8dfc-83ab824af909","Type":"ContainerStarted","Data":"05042c84693aaa23629f4ac152026b2dfb9e5e5f0502bfa511a261a35045da69"}
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.080236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" event={"ID":"a2a15c49-cac6-4772-be07-69fd7597b692","Type":"ContainerStarted","Data":"e83e58a3813f1e0ea4121919f3ca4dbd79503b80a0b424d536c795a4b74c5c50"}
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.100799 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" event={"ID":"0128668e-be83-412e-96e6-8c158ab45cc5","Type":"ContainerStarted","Data":"d6de948de704cc2abe940a4366f9c6caa54dfae4f2dfb4bb3f459f945a5fbb72"}
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.109665 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.116804 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.126776 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.133360 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.139621 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.145561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.145720 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.145776 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert podName:afb3d9d3-a3e1-4aac-89ef-a7128579e6e9 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:35.145759955 +0000 UTC m=+1148.122168438 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" (UID: "afb3d9d3-a3e1-4aac-89ef-a7128579e6e9") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: W0214 19:01:33.159356 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6ab7c6_c333_41db_ba23_f89b3eff3eef.slice/crio-3ccbae55491e7e9c78da42bd09bb929c474e1d2c8c8ee8295569f80bff9e7a22 WatchSource:0}: Error finding container 3ccbae55491e7e9c78da42bd09bb929c474e1d2c8c8ee8295569f80bff9e7a22: Status 404 returned error can't find the container with id 3ccbae55491e7e9c78da42bd09bb929c474e1d2c8c8ee8295569f80bff9e7a22
Feb 14 19:01:33 crc kubenswrapper[4897]: W0214 19:01:33.162080 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e11063d_aac7_4fea_91d9_0b560622ccb9.slice/crio-3bdf1edb20b9e8c7fc32c60f5cacfb5af065e58ffba872d4a9a5cec08d18243b WatchSource:0}: Error finding container 3bdf1edb20b9e8c7fc32c60f5cacfb5af065e58ffba872d4a9a5cec08d18243b: Status 404 returned error can't find the container with id 3bdf1edb20b9e8c7fc32c60f5cacfb5af065e58ffba872d4a9a5cec08d18243b
Feb 14 19:01:33 crc kubenswrapper[4897]: W0214 19:01:33.165701 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde1e8e22_10a4_4d2a_855f_4c7bb6a49096.slice/crio-4946599bf648ece59f362c85256e4e91bf2439d97769cb070a9c2f925d13c292 WatchSource:0}: Error finding container 4946599bf648ece59f362c85256e4e91bf2439d97769cb070a9c2f925d13c292: Status 404 returned error can't find the container with id 4946599bf648ece59f362c85256e4e91bf2439d97769cb070a9c2f925d13c292
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.565304 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.584574 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.596289 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7fnnb"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.609106 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9"]
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.612808 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7"]
Feb 14 19:01:33 crc kubenswrapper[4897]: W0214 19:01:33.616120 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2543021_51cc_4cbe_9293_a6e02894e1f4.slice/crio-68d5f4f034d03319e843873c6f550a46c958df5cfe737e19fd7a8ba15e8d4902 WatchSource:0}: Error finding container 68d5f4f034d03319e843873c6f550a46c958df5cfe737e19fd7a8ba15e8d4902: Status 404 returned error can't find the container with id 68d5f4f034d03319e843873c6f550a46c958df5cfe737e19fd7a8ba15e8d4902
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.619546 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq"]
Feb 14 19:01:33 crc kubenswrapper[4897]: W0214 19:01:33.665559 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod949ed147_ec0c_4e17_bc34_4d27018a9567.slice/crio-0bcc11e0bc755046fd043de15769ad3d5cfca8c717ad4b605825ebcc57f91403 WatchSource:0}: Error finding container 0bcc11e0bc755046fd043de15769ad3d5cfca8c717ad4b605825ebcc57f91403: Status 404 returned error can't find the container with id 0bcc11e0bc755046fd043de15769ad3d5cfca8c717ad4b605825ebcc57f91403
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.670543 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.670646 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.670954 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.671085 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:35.671010512 +0000 UTC m=+1148.647418995 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "metrics-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.671174 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.671269 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:35.671195678 +0000 UTC m=+1148.647604161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "webhook-server-cert" not found
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.741276 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h"]
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.750719 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbhnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wdv5h_openstack-operators(fc708ffc-dcb4-4ac0-9982-4cf347cd505d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.752539 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" podUID="fc708ffc-dcb4-4ac0-9982-4cf347cd505d"
Feb 14 19:01:33 crc kubenswrapper[4897]: I0214 19:01:33.754508 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk"]
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.803011 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psfgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-m5nfk_openstack-operators(0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 14 19:01:33 crc kubenswrapper[4897]: E0214 19:01:33.804359 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" podUID="0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6"
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.108811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" event={"ID":"d2543021-51cc-4cbe-9293-a6e02894e1f4","Type":"ContainerStarted","Data":"68d5f4f034d03319e843873c6f550a46c958df5cfe737e19fd7a8ba15e8d4902"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.110624 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" event={"ID":"cd0646ca-c695-4387-ba4b-cc9a3d85b460","Type":"ContainerStarted","Data":"bad41ef3b3dd348830e777b8fef79e131ee4658dc7233ad78167263d4efaa31d"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.112855 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" event={"ID":"f8e83507-87e8-44e6-a08d-f1f45f8b4ee0","Type":"ContainerStarted","Data":"56a11e601cf81d67f92a9839ed7e53074c5e58592c4c7ecd94134edf9931e999"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.114262 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" event={"ID":"26f58f32-c15c-49c7-8756-fc2bae972a2d","Type":"ContainerStarted","Data":"c7b308ddd8c594a14941d2b39338bb794315ac0984e163bfc0a402efc1b801b4"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.117561 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" event={"ID":"949ed147-ec0c-4e17-bc34-4d27018a9567","Type":"ContainerStarted","Data":"0bcc11e0bc755046fd043de15769ad3d5cfca8c717ad4b605825ebcc57f91403"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.120568 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" event={"ID":"7c6ab7c6-c333-41db-ba23-f89b3eff3eef","Type":"ContainerStarted","Data":"3ccbae55491e7e9c78da42bd09bb929c474e1d2c8c8ee8295569f80bff9e7a22"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.122268 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" event={"ID":"fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a","Type":"ContainerStarted","Data":"b1c4bf146737a4ee5da45157a5be87a7c62a5dc1edfe28ddb6bd469d77e97eae"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.127542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" event={"ID":"fc708ffc-dcb4-4ac0-9982-4cf347cd505d","Type":"ContainerStarted","Data":"ae006ea7f1afa32d4d3751ea5cc7539333e9af10d4dc6b04b7cc692e8ae10d0b"}
Feb 14 19:01:34 crc kubenswrapper[4897]: E0214 19:01:34.129701 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" podUID="fc708ffc-dcb4-4ac0-9982-4cf347cd505d"
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.129760 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" event={"ID":"5e11063d-aac7-4fea-91d9-0b560622ccb9","Type":"ContainerStarted","Data":"3bdf1edb20b9e8c7fc32c60f5cacfb5af065e58ffba872d4a9a5cec08d18243b"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.131116 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" event={"ID":"de1e8e22-10a4-4d2a-855f-4c7bb6a49096","Type":"ContainerStarted","Data":"4946599bf648ece59f362c85256e4e91bf2439d97769cb070a9c2f925d13c292"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.133802 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" event={"ID":"0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6","Type":"ContainerStarted","Data":"d54a46d1cb5870173e1472e9d3f2b68841ec7ccafce5dc92729a8b850fa789da"}
Feb 14 19:01:34 crc kubenswrapper[4897]: E0214 19:01:34.135251 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" podUID="0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6"
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.135976 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" event={"ID":"088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820","Type":"ContainerStarted","Data":"020d73fcc7b9de7cd267b6d1948dcfb9060a9b2eb02046e2dab1a67a33959a06"}
Feb 14 19:01:34 crc kubenswrapper[4897]: I0214 19:01:34.137982 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" event={"ID":"8238fbef-1e59-4430-af92-1be3d70c4d84","Type":"ContainerStarted","Data":"a94d002d0ebfa9a119b54de17c10f9780b8a4b4dddfc9d0d403cd489b324c54f"}
Feb 14 19:01:35 crc kubenswrapper[4897]: I0214 19:01:35.024786 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.024968 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.025054 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert podName:bd9aef55-ad36-4675-a79a-a1829c9b3b3e nodeName:}" failed. No retries permitted until 2026-02-14 19:01:39.025022923 +0000 UTC m=+1152.001431406 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert") pod "infra-operator-controller-manager-79d975b745-9ht86" (UID: "bd9aef55-ad36-4675-a79a-a1829c9b3b3e") : secret "infra-operator-webhook-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.158517 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" podUID="0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6"
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.159935 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" podUID="fc708ffc-dcb4-4ac0-9982-4cf347cd505d"
Feb 14 19:01:35 crc kubenswrapper[4897]: I0214 19:01:35.227792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.227978 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.228045 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert podName:afb3d9d3-a3e1-4aac-89ef-a7128579e6e9 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:39.228018924 +0000 UTC m=+1152.204427407 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" (UID: "afb3d9d3-a3e1-4aac-89ef-a7128579e6e9") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: I0214 19:01:35.739242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:35 crc kubenswrapper[4897]: I0214 19:01:35.739319 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.739516 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.739581 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:39.739564127 +0000 UTC m=+1152.715972620 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "metrics-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.739725 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 14 19:01:35 crc kubenswrapper[4897]: E0214 19:01:35.739904 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:39.739882117 +0000 UTC m=+1152.716290650 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "webhook-server-cert" not found
Feb 14 19:01:39 crc kubenswrapper[4897]: I0214 19:01:39.104544 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"
Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.104699 4897 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.104768 4897 nestedpendingoperations.go:348]
Operation for "{volumeName:kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert podName:bd9aef55-ad36-4675-a79a-a1829c9b3b3e nodeName:}" failed. No retries permitted until 2026-02-14 19:01:47.10475237 +0000 UTC m=+1160.081160853 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert") pod "infra-operator-controller-manager-79d975b745-9ht86" (UID: "bd9aef55-ad36-4675-a79a-a1829c9b3b3e") : secret "infra-operator-webhook-server-cert" not found Feb 14 19:01:39 crc kubenswrapper[4897]: I0214 19:01:39.308146 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.308574 4897 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.308619 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert podName:afb3d9d3-a3e1-4aac-89ef-a7128579e6e9 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:47.308605939 +0000 UTC m=+1160.285014422 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" (UID: "afb3d9d3-a3e1-4aac-89ef-a7128579e6e9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 19:01:39 crc kubenswrapper[4897]: I0214 19:01:39.817116 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.817331 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.817427 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:47.817403667 +0000 UTC m=+1160.793812230 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "metrics-server-cert" not found Feb 14 19:01:39 crc kubenswrapper[4897]: I0214 19:01:39.817460 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.817642 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 19:01:39 crc kubenswrapper[4897]: E0214 19:01:39.817725 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:01:47.817703147 +0000 UTC m=+1160.794111730 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "webhook-server-cert" not found Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.164585 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.188660 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd9aef55-ad36-4675-a79a-a1829c9b3b3e-cert\") pod \"infra-operator-controller-manager-79d975b745-9ht86\" (UID: \"bd9aef55-ad36-4675-a79a-a1829c9b3b3e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.365545 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.368451 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.375130 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afb3d9d3-a3e1-4aac-89ef-a7128579e6e9-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csghqz\" (UID: \"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.409384 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.879830 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:47 crc kubenswrapper[4897]: I0214 19:01:47.879916 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:01:47 crc kubenswrapper[4897]: E0214 19:01:47.880057 4897 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 19:01:47 crc kubenswrapper[4897]: E0214 19:01:47.880103 4897 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 19:01:47 crc kubenswrapper[4897]: E0214 19:01:47.880175 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:02:03.880130836 +0000 UTC m=+1176.856539349 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "webhook-server-cert" not found Feb 14 19:01:47 crc kubenswrapper[4897]: E0214 19:01:47.880259 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs podName:4243feec-23ed-4292-9291-7ad01f7d12a6 nodeName:}" failed. No retries permitted until 2026-02-14 19:02:03.880222518 +0000 UTC m=+1176.856631031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs") pod "openstack-operator-controller-manager-778945c4f9-cbw2h" (UID: "4243feec-23ed-4292-9291-7ad01f7d12a6") : secret "metrics-server-cert" not found Feb 14 19:01:48 crc kubenswrapper[4897]: E0214 19:01:48.141929 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 14 19:01:48 crc kubenswrapper[4897]: E0214 19:01:48.142132 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsbj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-drm7d_openstack-operators(fe513351-3f7b-436d-9218-a66a6f579948): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:48 crc kubenswrapper[4897]: E0214 19:01:48.143328 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" podUID="fe513351-3f7b-436d-9218-a66a6f579948" Feb 14 19:01:48 crc kubenswrapper[4897]: E0214 19:01:48.961366 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" podUID="fe513351-3f7b-436d-9218-a66a6f579948" Feb 14 19:01:49 crc kubenswrapper[4897]: E0214 19:01:49.504121 4897 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 14 19:01:49 crc kubenswrapper[4897]: E0214 19:01:49.504500 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgv8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-ts22t_openstack-operators(8dffc7df-2563-4f02-8dfc-83ab824af909): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:49 crc kubenswrapper[4897]: E0214 19:01:49.506333 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" Feb 14 19:01:50 crc kubenswrapper[4897]: E0214 19:01:50.320971 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" Feb 14 19:01:50 crc kubenswrapper[4897]: E0214 19:01:50.999179 4897 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 14 19:01:50 crc kubenswrapper[4897]: E0214 19:01:50.999700 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cndqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-fzgws_openstack-operators(a2a15c49-cac6-4772-be07-69fd7597b692): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:51 crc kubenswrapper[4897]: E0214 19:01:51.001408 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" podUID="a2a15c49-cac6-4772-be07-69fd7597b692" Feb 14 19:01:51 crc kubenswrapper[4897]: E0214 19:01:51.330060 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" podUID="a2a15c49-cac6-4772-be07-69fd7597b692" Feb 14 19:01:52 crc kubenswrapper[4897]: E0214 19:01:52.817234 4897 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 14 19:01:52 crc kubenswrapper[4897]: E0214 19:01:52.817680 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dc77r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-wsghb_openstack-operators(0128668e-be83-412e-96e6-8c158ab45cc5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:52 crc kubenswrapper[4897]: E0214 19:01:52.819248 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" podUID="0128668e-be83-412e-96e6-8c158ab45cc5" Feb 14 19:01:53 crc kubenswrapper[4897]: E0214 19:01:53.344658 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" podUID="0128668e-be83-412e-96e6-8c158ab45cc5" Feb 14 19:01:53 crc kubenswrapper[4897]: E0214 19:01:53.409642 4897 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 14 19:01:53 crc kubenswrapper[4897]: E0214 19:01:53.409816 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-86djg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-bh95f_openstack-operators(d2543021-51cc-4cbe-9293-a6e02894e1f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:53 crc kubenswrapper[4897]: E0214 19:01:53.411055 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" podUID="d2543021-51cc-4cbe-9293-a6e02894e1f4" Feb 14 19:01:54 crc kubenswrapper[4897]: E0214 19:01:54.355781 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" podUID="d2543021-51cc-4cbe-9293-a6e02894e1f4" Feb 14 19:01:55 crc kubenswrapper[4897]: E0214 19:01:55.807937 4897 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 14 19:01:55 crc kubenswrapper[4897]: E0214 19:01:55.808148 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdjll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-vv2k7_openstack-operators(26f58f32-c15c-49c7-8756-fc2bae972a2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:55 crc kubenswrapper[4897]: E0214 19:01:55.810258 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" podUID="26f58f32-c15c-49c7-8756-fc2bae972a2d" Feb 14 19:01:56 crc kubenswrapper[4897]: E0214 19:01:56.374763 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" podUID="26f58f32-c15c-49c7-8756-fc2bae972a2d" Feb 14 19:01:58 crc kubenswrapper[4897]: E0214 19:01:58.722866 4897 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 14 19:01:58 crc kubenswrapper[4897]: E0214 19:01:58.723171 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-94q6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-7fnnb_openstack-operators(f8e83507-87e8-44e6-a08d-f1f45f8b4ee0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:01:58 crc kubenswrapper[4897]: E0214 19:01:58.724711 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" podUID="f8e83507-87e8-44e6-a08d-f1f45f8b4ee0" Feb 14 19:01:59 crc kubenswrapper[4897]: E0214 19:01:59.399942 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" podUID="f8e83507-87e8-44e6-a08d-f1f45f8b4ee0" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.311480 4897 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.312010 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhnbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-tsqnc_openstack-operators(de1e8e22-10a4-4d2a-855f-4c7bb6a49096): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.313262 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" podUID="de1e8e22-10a4-4d2a-855f-4c7bb6a49096" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.417312 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" podUID="de1e8e22-10a4-4d2a-855f-4c7bb6a49096" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.795199 4897 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.795381 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9b79z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-qbz5t_openstack-operators(fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:00 crc kubenswrapper[4897]: E0214 19:02:00.796655 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" podUID="fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.370811 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.371114 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m42zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-gfrd9_openstack-operators(cd0646ca-c695-4387-ba4b-cc9a3d85b460): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.372939 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" podUID="cd0646ca-c695-4387-ba4b-cc9a3d85b460" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.426872 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" podUID="fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.428829 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" podUID="cd0646ca-c695-4387-ba4b-cc9a3d85b460" Feb 14 19:02:01 crc kubenswrapper[4897]: I0214 19:02:01.725715 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:02:01 crc kubenswrapper[4897]: I0214 19:02:01.725799 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.915615 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.916045 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q88gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-5dg28_openstack-operators(6fe73ade-8031-493c-9628-018ad436c7a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:01 crc kubenswrapper[4897]: E0214 19:02:01.917199 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" podUID="6fe73ade-8031-493c-9628-018ad436c7a5" Feb 14 19:02:02 crc kubenswrapper[4897]: E0214 19:02:02.437879 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" podUID="6fe73ade-8031-493c-9628-018ad436c7a5" Feb 14 19:02:03 crc kubenswrapper[4897]: E0214 19:02:03.561261 4897 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.94:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 14 19:02:03 crc kubenswrapper[4897]: E0214 19:02:03.561313 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.94:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 14 19:02:03 crc kubenswrapper[4897]: E0214 19:02:03.561461 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.94:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp7ss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58f847fcbd-9djqq_openstack-operators(949ed147-ec0c-4e17-bc34-4d27018a9567): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:03 crc kubenswrapper[4897]: E0214 19:02:03.562687 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" podUID="949ed147-ec0c-4e17-bc34-4d27018a9567" Feb 14 19:02:03 crc kubenswrapper[4897]: I0214 19:02:03.953186 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod 
\"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:02:03 crc kubenswrapper[4897]: I0214 19:02:03.953534 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:02:03 crc kubenswrapper[4897]: I0214 19:02:03.958879 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-webhook-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:02:03 crc kubenswrapper[4897]: I0214 19:02:03.962144 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4243feec-23ed-4292-9291-7ad01f7d12a6-metrics-certs\") pod \"openstack-operator-controller-manager-778945c4f9-cbw2h\" (UID: \"4243feec-23ed-4292-9291-7ad01f7d12a6\") " pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:02:03 crc kubenswrapper[4897]: I0214 19:02:03.992582 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.152472 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.152815 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4h75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-gvcdc_openstack-operators(088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.154110 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" podUID="088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.459944 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.94:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" podUID="949ed147-ec0c-4e17-bc34-4d27018a9567" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.460450 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" podUID="088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.707675 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.707886 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tsjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-nwjnd_openstack-operators(5e11063d-aac7-4fea-91d9-0b560622ccb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:04 crc kubenswrapper[4897]: E0214 19:02:04.709205 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" podUID="5e11063d-aac7-4fea-91d9-0b560622ccb9" Feb 14 19:02:05 crc kubenswrapper[4897]: E0214 19:02:05.096699 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 14 19:02:05 crc kubenswrapper[4897]: E0214 19:02:05.096883 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbhnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wdv5h_openstack-operators(fc708ffc-dcb4-4ac0-9982-4cf347cd505d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:02:05 crc kubenswrapper[4897]: E0214 19:02:05.098072 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" podUID="fc708ffc-dcb4-4ac0-9982-4cf347cd505d" Feb 14 19:02:05 crc kubenswrapper[4897]: I0214 19:02:05.468017 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" event={"ID":"7c6ab7c6-c333-41db-ba23-f89b3eff3eef","Type":"ContainerStarted","Data":"079c2dff52da2e6a8ddf92c33c5d18d230939daeb5a3107ebf55bb32498aa8c0"} Feb 14 19:02:05 crc kubenswrapper[4897]: I0214 19:02:05.468634 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:02:05 crc kubenswrapper[4897]: E0214 19:02:05.469468 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" podUID="5e11063d-aac7-4fea-91d9-0b560622ccb9" Feb 14 19:02:05 crc kubenswrapper[4897]: I0214 19:02:05.517913 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" podStartSLOduration=4.140632234 podStartE2EDuration="34.517887379s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.162360588 +0000 UTC m=+1146.138769071" lastFinishedPulling="2026-02-14 19:02:03.539615703 +0000 UTC m=+1176.516024216" observedRunningTime="2026-02-14 19:02:05.508498976 +0000 UTC m=+1178.484907469" watchObservedRunningTime="2026-02-14 19:02:05.517887379 +0000 UTC m=+1178.494295862" Feb 14 19:02:05 crc kubenswrapper[4897]: I0214 19:02:05.601725 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"] Feb 14 19:02:05 crc kubenswrapper[4897]: I0214 19:02:05.691643 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"] Feb 14 19:02:05 crc kubenswrapper[4897]: I0214 19:02:05.696061 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"] Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.476096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" event={"ID":"bd9aef55-ad36-4675-a79a-a1829c9b3b3e","Type":"ContainerStarted","Data":"7dabcbe57f6fdf783545e8fcd915ce9004bcd80c0ed4cdd44cefacf38a14a53d"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.477565 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" event={"ID":"a2a15c49-cac6-4772-be07-69fd7597b692","Type":"ContainerStarted","Data":"0698836e5504838594407acba9499d8c3798184b5cfbc432ffa6becfee9c828f"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.478649 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.479975 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" event={"ID":"fe513351-3f7b-436d-9218-a66a6f579948","Type":"ContainerStarted","Data":"8d041a5457b73ea27b4411722bc44bcfa102b5fdc7ac7758bb2bf6877b55f1fc"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.480351 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.481873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" event={"ID":"8dffc7df-2563-4f02-8dfc-83ab824af909","Type":"ContainerStarted","Data":"65894fda9db1509568f44e14199ee76411a793e2ef469351a0b5002c5f8fe097"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.482241 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.483396 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" event={"ID":"4243feec-23ed-4292-9291-7ad01f7d12a6","Type":"ContainerStarted","Data":"d99c7c2b5ce9b96d00a34fc107cc7b5267fa26f9487193008cec0f8f0b8043c1"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.483417 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" event={"ID":"4243feec-23ed-4292-9291-7ad01f7d12a6","Type":"ContainerStarted","Data":"293ec746cf677b5ffb698616edb2f779c46158ffb5e7d89dbcf85df250dc8ed4"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.483759 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.485487 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" event={"ID":"8238fbef-1e59-4430-af92-1be3d70c4d84","Type":"ContainerStarted","Data":"215ed2774989cfd7b606892f4221d227f2cf2f713e837173a39684b7d75b6bb5"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.485852 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.486897 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" event={"ID":"10c98e4f-ae22-481b-992d-6804a1b5d0cc","Type":"ContainerStarted","Data":"859af0f7115919b1821376185cd7e544c2500037d0c46e80efc87c1fe39d34b8"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.487079 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 
19:02:06.488255 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" event={"ID":"0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6","Type":"ContainerStarted","Data":"236ad7e2c3c208211b6a8b37e6cafc784e556aae57f692bab89216ecdff5a523"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.488518 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.489219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" event={"ID":"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9","Type":"ContainerStarted","Data":"f739792f39a39153dcbcee717b0e6d81e29c186536199d36663aea657e89c2d2"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.490629 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" event={"ID":"48e0b91f-f946-4ecc-b36c-fc280e728f77","Type":"ContainerStarted","Data":"3f3dd5109a2ff73f51920f3433f1c8c24be2bd5cb273c7f717c14612e22c9052"} Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.490899 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.509358 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" podStartSLOduration=3.8914777369999998 podStartE2EDuration="36.509340783s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:32.600930084 +0000 UTC m=+1145.577338567" lastFinishedPulling="2026-02-14 19:02:05.21879313 +0000 UTC m=+1178.195201613" observedRunningTime="2026-02-14 19:02:06.503616685 +0000 UTC 
m=+1179.480025178" watchObservedRunningTime="2026-02-14 19:02:06.509340783 +0000 UTC m=+1179.485749266" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.538371 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" podStartSLOduration=35.538356575 podStartE2EDuration="35.538356575s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:02:06.53721738 +0000 UTC m=+1179.513625873" watchObservedRunningTime="2026-02-14 19:02:06.538356575 +0000 UTC m=+1179.514765058" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.556272 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" podStartSLOduration=4.672472063 podStartE2EDuration="35.556255162s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.802643319 +0000 UTC m=+1146.779051802" lastFinishedPulling="2026-02-14 19:02:04.686426418 +0000 UTC m=+1177.662834901" observedRunningTime="2026-02-14 19:02:06.551184074 +0000 UTC m=+1179.527592567" watchObservedRunningTime="2026-02-14 19:02:06.556255162 +0000 UTC m=+1179.532663645" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.563598 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" podStartSLOduration=4.573008309 podStartE2EDuration="35.563581799s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.141092411 +0000 UTC m=+1146.117500894" lastFinishedPulling="2026-02-14 19:02:04.131665901 +0000 UTC m=+1177.108074384" observedRunningTime="2026-02-14 19:02:06.562107714 +0000 UTC m=+1179.538516197" watchObservedRunningTime="2026-02-14 
19:02:06.563581799 +0000 UTC m=+1179.539990282" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.575659 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" podStartSLOduration=3.672785149 podStartE2EDuration="36.575641304s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:32.234412 +0000 UTC m=+1145.210820483" lastFinishedPulling="2026-02-14 19:02:05.137268115 +0000 UTC m=+1178.113676638" observedRunningTime="2026-02-14 19:02:06.575002204 +0000 UTC m=+1179.551410707" watchObservedRunningTime="2026-02-14 19:02:06.575641304 +0000 UTC m=+1179.552049787" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.599463 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" podStartSLOduration=4.99090594 podStartE2EDuration="36.599443414s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:31.931658857 +0000 UTC m=+1144.908067340" lastFinishedPulling="2026-02-14 19:02:03.540196321 +0000 UTC m=+1176.516604814" observedRunningTime="2026-02-14 19:02:06.593386856 +0000 UTC m=+1179.569795359" watchObservedRunningTime="2026-02-14 19:02:06.599443414 +0000 UTC m=+1179.575851897" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.619809 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podStartSLOduration=3.6506656790000003 podStartE2EDuration="36.619791227s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:32.203182736 +0000 UTC m=+1145.179591219" lastFinishedPulling="2026-02-14 19:02:05.172308274 +0000 UTC m=+1178.148716767" observedRunningTime="2026-02-14 19:02:06.611825599 +0000 UTC m=+1179.588234102" watchObservedRunningTime="2026-02-14 
19:02:06.619791227 +0000 UTC m=+1179.596199710" Feb 14 19:02:06 crc kubenswrapper[4897]: I0214 19:02:06.650264 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" podStartSLOduration=5.696048015 podStartE2EDuration="36.650247374s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:32.58592925 +0000 UTC m=+1145.562337743" lastFinishedPulling="2026-02-14 19:02:03.540128609 +0000 UTC m=+1176.516537102" observedRunningTime="2026-02-14 19:02:06.645957211 +0000 UTC m=+1179.622365714" watchObservedRunningTime="2026-02-14 19:02:06.650247374 +0000 UTC m=+1179.626655877" Feb 14 19:02:08 crc kubenswrapper[4897]: I0214 19:02:08.507803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" event={"ID":"0128668e-be83-412e-96e6-8c158ab45cc5","Type":"ContainerStarted","Data":"abb27865f930275e6ca847f0dac14c37f799011817d8047f5719197be623ed78"} Feb 14 19:02:08 crc kubenswrapper[4897]: I0214 19:02:08.508328 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" Feb 14 19:02:08 crc kubenswrapper[4897]: I0214 19:02:08.529638 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" podStartSLOduration=3.806093915 podStartE2EDuration="38.529620484s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:32.583593188 +0000 UTC m=+1145.560001671" lastFinishedPulling="2026-02-14 19:02:07.307119757 +0000 UTC m=+1180.283528240" observedRunningTime="2026-02-14 19:02:08.52435227 +0000 UTC m=+1181.500760773" watchObservedRunningTime="2026-02-14 19:02:08.529620484 +0000 UTC m=+1181.506028977" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.259225 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.314598 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.351237 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.389365 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.490538 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.690057 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.728479 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" Feb 14 19:02:11 crc kubenswrapper[4897]: I0214 19:02:11.929192 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.575149 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" event={"ID":"d2543021-51cc-4cbe-9293-a6e02894e1f4","Type":"ContainerStarted","Data":"758aff236a304507d08c061ec5ad79c2a7894385632206b5142d88e0f6aa5dc6"} 
Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.575621 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.576705 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" event={"ID":"de1e8e22-10a4-4d2a-855f-4c7bb6a49096","Type":"ContainerStarted","Data":"03400ee5c4fe42605615ad443d43433e22ea22180a952879e06f7826015b6a36"} Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.576893 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.578287 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" event={"ID":"26f58f32-c15c-49c7-8756-fc2bae972a2d","Type":"ContainerStarted","Data":"c1c227ed02a99888b4b2e58f51ee597f8a09cd65e7bcc60087fa9216912b9b60"} Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.578462 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.579878 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" event={"ID":"bd9aef55-ad36-4675-a79a-a1829c9b3b3e","Type":"ContainerStarted","Data":"b484b61b722c887bc5c7ad9d62f6188bc733e471869735788b4ae68000ec7a38"} Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.579985 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.581523 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" event={"ID":"afb3d9d3-a3e1-4aac-89ef-a7128579e6e9","Type":"ContainerStarted","Data":"eb592faaf34086b77d8e2ad27bef207c9a1b2e0a8e05b569487ce76ed6ad0295"} Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.581670 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.599699 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" podStartSLOduration=3.434810843 podStartE2EDuration="42.599680424s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.648518607 +0000 UTC m=+1146.624927090" lastFinishedPulling="2026-02-14 19:02:12.813388178 +0000 UTC m=+1185.789796671" observedRunningTime="2026-02-14 19:02:13.592909723 +0000 UTC m=+1186.569318216" watchObservedRunningTime="2026-02-14 19:02:13.599680424 +0000 UTC m=+1186.576088907" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.626018 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" podStartSLOduration=3.502545576 podStartE2EDuration="42.626000802s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.663156579 +0000 UTC m=+1146.639565052" lastFinishedPulling="2026-02-14 19:02:12.786611795 +0000 UTC m=+1185.763020278" observedRunningTime="2026-02-14 19:02:13.621218973 +0000 UTC m=+1186.597627466" watchObservedRunningTime="2026-02-14 19:02:13.626000802 +0000 UTC m=+1186.602409285" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.642462 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" podStartSLOduration=35.445776327 podStartE2EDuration="42.642443273s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:02:05.615520344 +0000 UTC m=+1178.591928827" lastFinishedPulling="2026-02-14 19:02:12.81218729 +0000 UTC m=+1185.788595773" observedRunningTime="2026-02-14 19:02:13.642438373 +0000 UTC m=+1186.618846866" watchObservedRunningTime="2026-02-14 19:02:13.642443273 +0000 UTC m=+1186.618851756" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.657737 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" podStartSLOduration=36.521059797 podStartE2EDuration="43.657716608s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:02:05.693119856 +0000 UTC m=+1178.669528339" lastFinishedPulling="2026-02-14 19:02:12.829776667 +0000 UTC m=+1185.806185150" observedRunningTime="2026-02-14 19:02:13.654125366 +0000 UTC m=+1186.630533849" watchObservedRunningTime="2026-02-14 19:02:13.657716608 +0000 UTC m=+1186.634125111" Feb 14 19:02:13 crc kubenswrapper[4897]: I0214 19:02:13.674895 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" podStartSLOduration=4.03157628 podStartE2EDuration="43.674873391s" podCreationTimestamp="2026-02-14 19:01:30 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.169762046 +0000 UTC m=+1146.146170519" lastFinishedPulling="2026-02-14 19:02:12.813059147 +0000 UTC m=+1185.789467630" observedRunningTime="2026-02-14 19:02:13.670571258 +0000 UTC m=+1186.646979741" watchObservedRunningTime="2026-02-14 19:02:13.674873391 +0000 UTC m=+1186.651281864" Feb 14 19:02:14 crc kubenswrapper[4897]: I0214 19:02:14.006272 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h"
Feb 14 19:02:14 crc kubenswrapper[4897]: I0214 19:02:14.590059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" event={"ID":"f8e83507-87e8-44e6-a08d-f1f45f8b4ee0","Type":"ContainerStarted","Data":"0fe8dbd55569790c24adcd56936764b7793e2cce69c7c4d7b5418c3a9282573f"}
Feb 14 19:02:14 crc kubenswrapper[4897]: I0214 19:02:14.590545 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb"
Feb 14 19:02:14 crc kubenswrapper[4897]: I0214 19:02:14.592481 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" event={"ID":"fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a","Type":"ContainerStarted","Data":"a2759097035ac8aecef47bfbf92451b7132166fa851c8e9404f1813993539745"}
Feb 14 19:02:14 crc kubenswrapper[4897]: I0214 19:02:14.609196 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" podStartSLOduration=2.965544124 podStartE2EDuration="43.609182859s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.647789215 +0000 UTC m=+1146.624197698" lastFinishedPulling="2026-02-14 19:02:14.29142795 +0000 UTC m=+1187.267836433" observedRunningTime="2026-02-14 19:02:14.608045834 +0000 UTC m=+1187.584454327" watchObservedRunningTime="2026-02-14 19:02:14.609182859 +0000 UTC m=+1187.585591342"
Feb 14 19:02:14 crc kubenswrapper[4897]: I0214 19:02:14.633140 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" podStartSLOduration=3.067627229 podStartE2EDuration="43.633120294s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.647768285 +0000 UTC m=+1146.624176768" lastFinishedPulling="2026-02-14 19:02:14.21326135 +0000 UTC m=+1187.189669833" observedRunningTime="2026-02-14 19:02:14.626091575 +0000 UTC m=+1187.602500078" watchObservedRunningTime="2026-02-14 19:02:14.633120294 +0000 UTC m=+1187.609528777"
Feb 14 19:02:15 crc kubenswrapper[4897]: I0214 19:02:15.601654 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" event={"ID":"cd0646ca-c695-4387-ba4b-cc9a3d85b460","Type":"ContainerStarted","Data":"eb599e4cd7fec672d73956454f49691a895017ea1955c71b446aab362b2f001b"}
Feb 14 19:02:15 crc kubenswrapper[4897]: I0214 19:02:15.602080 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9"
Feb 14 19:02:15 crc kubenswrapper[4897]: I0214 19:02:15.619918 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" podStartSLOduration=3.014139505 podStartE2EDuration="44.619900442s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.647769195 +0000 UTC m=+1146.624177678" lastFinishedPulling="2026-02-14 19:02:15.253530132 +0000 UTC m=+1188.229938615" observedRunningTime="2026-02-14 19:02:15.612842203 +0000 UTC m=+1188.589250686" watchObservedRunningTime="2026-02-14 19:02:15.619900442 +0000 UTC m=+1188.596308925"
Feb 14 19:02:17 crc kubenswrapper[4897]: I0214 19:02:17.620946 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" event={"ID":"949ed147-ec0c-4e17-bc34-4d27018a9567","Type":"ContainerStarted","Data":"e39281b5a951db58e96874e8f7bab2e834165fec67fbe2a41082708880868573"}
Feb 14 19:02:17 crc kubenswrapper[4897]: I0214 19:02:17.622458 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq"
Feb 14 19:02:17 crc kubenswrapper[4897]: I0214 19:02:17.678824 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" podStartSLOduration=3.512288223 podStartE2EDuration="46.678806445s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.695281402 +0000 UTC m=+1146.671689885" lastFinishedPulling="2026-02-14 19:02:16.861799614 +0000 UTC m=+1189.838208107" observedRunningTime="2026-02-14 19:02:17.670859548 +0000 UTC m=+1190.647268091" watchObservedRunningTime="2026-02-14 19:02:17.678806445 +0000 UTC m=+1190.655214928"
Feb 14 19:02:17 crc kubenswrapper[4897]: E0214 19:02:17.854600 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" podUID="fc708ffc-dcb4-4ac0-9982-4cf347cd505d"
Feb 14 19:02:18 crc kubenswrapper[4897]: I0214 19:02:18.630784 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" event={"ID":"088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820","Type":"ContainerStarted","Data":"2648ff8a009c64ac0b283ef6dc8b8a4e96093e5dc96f5b73bc557a0c2ab9db76"}
Feb 14 19:02:18 crc kubenswrapper[4897]: I0214 19:02:18.631670 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc"
Feb 14 19:02:18 crc kubenswrapper[4897]: I0214 19:02:18.633140 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" event={"ID":"6fe73ade-8031-493c-9628-018ad436c7a5","Type":"ContainerStarted","Data":"9b06740096b19cad4983780a849933cdb34e98d2c3be4881676a5d3c305c22a5"}
Feb 14 19:02:18 crc kubenswrapper[4897]: I0214 19:02:18.633520 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28"
Feb 14 19:02:18 crc kubenswrapper[4897]: I0214 19:02:18.647793 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" podStartSLOduration=2.524694872 podStartE2EDuration="47.64776059s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.170582392 +0000 UTC m=+1146.146990875" lastFinishedPulling="2026-02-14 19:02:18.2936481 +0000 UTC m=+1191.270056593" observedRunningTime="2026-02-14 19:02:18.645221991 +0000 UTC m=+1191.621630484" watchObservedRunningTime="2026-02-14 19:02:18.64776059 +0000 UTC m=+1191.624169073"
Feb 14 19:02:18 crc kubenswrapper[4897]: I0214 19:02:18.665475 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" podStartSLOduration=2.308326469 podStartE2EDuration="47.665453789s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:32.937755399 +0000 UTC m=+1145.914163882" lastFinishedPulling="2026-02-14 19:02:18.294882709 +0000 UTC m=+1191.271291202" observedRunningTime="2026-02-14 19:02:18.66158894 +0000 UTC m=+1191.637997443" watchObservedRunningTime="2026-02-14 19:02:18.665453789 +0000 UTC m=+1191.641862282"
Feb 14 19:02:20 crc kubenswrapper[4897]: I0214 19:02:20.651570 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" event={"ID":"5e11063d-aac7-4fea-91d9-0b560622ccb9","Type":"ContainerStarted","Data":"363a3bdaf9c000550770a3e15365e52df3ef8eeb56c464493cfc02d6c40f4c06"}
Feb 14 19:02:20 crc kubenswrapper[4897]: I0214 19:02:20.652288 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd"
Feb 14 19:02:20 crc kubenswrapper[4897]: I0214 19:02:20.678968 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" podStartSLOduration=2.605424331 podStartE2EDuration="49.678942009s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.167711353 +0000 UTC m=+1146.144119836" lastFinishedPulling="2026-02-14 19:02:20.241229021 +0000 UTC m=+1193.217637514" observedRunningTime="2026-02-14 19:02:20.667780762 +0000 UTC m=+1193.644189305" watchObservedRunningTime="2026-02-14 19:02:20.678942009 +0000 UTC m=+1193.655350522"
Feb 14 19:02:21 crc kubenswrapper[4897]: I0214 19:02:21.409059 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb"
Feb 14 19:02:21 crc kubenswrapper[4897]: I0214 19:02:21.453669 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc"
Feb 14 19:02:21 crc kubenswrapper[4897]: I0214 19:02:21.768399 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t"
Feb 14 19:02:21 crc kubenswrapper[4897]: I0214 19:02:21.771414 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t"
Feb 14 19:02:21 crc kubenswrapper[4897]: I0214 19:02:21.809818 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f"
Feb 14 19:02:21 crc kubenswrapper[4897]: I0214 19:02:21.844454 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9"
Feb 14 19:02:22 crc kubenswrapper[4897]: I0214 19:02:22.151773 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq"
Feb 14 19:02:22 crc kubenswrapper[4897]: I0214 19:02:22.204922 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb"
Feb 14 19:02:22 crc kubenswrapper[4897]: I0214 19:02:22.225689 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7"
Feb 14 19:02:27 crc kubenswrapper[4897]: I0214 19:02:27.376530 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86"
Feb 14 19:02:27 crc kubenswrapper[4897]: I0214 19:02:27.417328 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.584861 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd"
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.708008 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28"
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.725910 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.725970 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.762443 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc"
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.769684 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" event={"ID":"fc708ffc-dcb4-4ac0-9982-4cf347cd505d","Type":"ContainerStarted","Data":"458dfd21057e46684e8a44503cca3bbb54266337f246efae04ad56f536137512"}
Feb 14 19:02:31 crc kubenswrapper[4897]: I0214 19:02:31.812065 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wdv5h" podStartSLOduration=3.314832543 podStartE2EDuration="1m0.812016399s" podCreationTimestamp="2026-02-14 19:01:31 +0000 UTC" firstStartedPulling="2026-02-14 19:01:33.750588001 +0000 UTC m=+1146.726996484" lastFinishedPulling="2026-02-14 19:02:31.247771827 +0000 UTC m=+1204.224180340" observedRunningTime="2026-02-14 19:02:31.802319178 +0000 UTC m=+1204.778727681" watchObservedRunningTime="2026-02-14 19:02:31.812016399 +0000 UTC m=+1204.788424892"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.313036 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x827s"]
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.316477 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.323398 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.323419 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.323475 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.323664 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8jfjh"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.337441 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x827s"]
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.391801 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d835d2d-ab92-4f38-910d-903b14c84bf8-config\") pod \"dnsmasq-dns-675f4bcbfc-x827s\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.391892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jp8\" (UniqueName: \"kubernetes.io/projected/7d835d2d-ab92-4f38-910d-903b14c84bf8-kube-api-access-v7jp8\") pod \"dnsmasq-dns-675f4bcbfc-x827s\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.409104 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tv2zr"]
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.410563 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.414980 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.432517 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tv2zr"]
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.497293 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glczv\" (UniqueName: \"kubernetes.io/projected/98b49e1d-0ebd-44d6-b70b-ef73531226f3-kube-api-access-glczv\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.497402 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d835d2d-ab92-4f38-910d-903b14c84bf8-config\") pod \"dnsmasq-dns-675f4bcbfc-x827s\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.497464 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jp8\" (UniqueName: \"kubernetes.io/projected/7d835d2d-ab92-4f38-910d-903b14c84bf8-kube-api-access-v7jp8\") pod \"dnsmasq-dns-675f4bcbfc-x827s\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.497497 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.497524 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-config\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.500418 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d835d2d-ab92-4f38-910d-903b14c84bf8-config\") pod \"dnsmasq-dns-675f4bcbfc-x827s\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.525159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jp8\" (UniqueName: \"kubernetes.io/projected/7d835d2d-ab92-4f38-910d-903b14c84bf8-kube-api-access-v7jp8\") pod \"dnsmasq-dns-675f4bcbfc-x827s\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.598461 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-config\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.598530 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glczv\" (UniqueName: \"kubernetes.io/projected/98b49e1d-0ebd-44d6-b70b-ef73531226f3-kube-api-access-glczv\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.598642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.599597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.599659 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-config\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.613971 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glczv\" (UniqueName: \"kubernetes.io/projected/98b49e1d-0ebd-44d6-b70b-ef73531226f3-kube-api-access-glczv\") pod \"dnsmasq-dns-78dd6ddcc-tv2zr\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.651141 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s"
Feb 14 19:02:49 crc kubenswrapper[4897]: I0214 19:02:49.728401 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr"
Feb 14 19:02:50 crc kubenswrapper[4897]: I0214 19:02:50.296715 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tv2zr"]
Feb 14 19:02:50 crc kubenswrapper[4897]: W0214 19:02:50.299836 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b49e1d_0ebd_44d6_b70b_ef73531226f3.slice/crio-d170adb0af22bc33128be4b1735e445a0aebac6899bec8718e03c4344bdba75c WatchSource:0}: Error finding container d170adb0af22bc33128be4b1735e445a0aebac6899bec8718e03c4344bdba75c: Status 404 returned error can't find the container with id d170adb0af22bc33128be4b1735e445a0aebac6899bec8718e03c4344bdba75c
Feb 14 19:02:50 crc kubenswrapper[4897]: W0214 19:02:50.300810 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d835d2d_ab92_4f38_910d_903b14c84bf8.slice/crio-144b0a420ff8b86264e422e4d5d7417c60b105e6664bb42d0f0573c88262990f WatchSource:0}: Error finding container 144b0a420ff8b86264e422e4d5d7417c60b105e6664bb42d0f0573c88262990f: Status 404 returned error can't find the container with id 144b0a420ff8b86264e422e4d5d7417c60b105e6664bb42d0f0573c88262990f
Feb 14 19:02:50 crc kubenswrapper[4897]: I0214 19:02:50.306382 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x827s"]
Feb 14 19:02:50 crc kubenswrapper[4897]: I0214 19:02:50.993042 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr" event={"ID":"98b49e1d-0ebd-44d6-b70b-ef73531226f3","Type":"ContainerStarted","Data":"d170adb0af22bc33128be4b1735e445a0aebac6899bec8718e03c4344bdba75c"}
Feb 14 19:02:50 crc kubenswrapper[4897]: I0214 19:02:50.994014 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s" event={"ID":"7d835d2d-ab92-4f38-910d-903b14c84bf8","Type":"ContainerStarted","Data":"144b0a420ff8b86264e422e4d5d7417c60b105e6664bb42d0f0573c88262990f"}
Feb 14 19:02:51 crc kubenswrapper[4897]: I0214 19:02:51.976358 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x827s"]
Feb 14 19:02:51 crc kubenswrapper[4897]: I0214 19:02:51.997357 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xrp8l"]
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.006720 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.053421 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xrp8l"]
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.141781 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvfq4\" (UniqueName: \"kubernetes.io/projected/867917b7-904f-46b3-b1d3-6f9f760aabc7-kube-api-access-fvfq4\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.141901 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-config\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.141947 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.243005 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.243093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvfq4\" (UniqueName: \"kubernetes.io/projected/867917b7-904f-46b3-b1d3-6f9f760aabc7-kube-api-access-fvfq4\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.243182 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-config\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.244363 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-config\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.244868 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.269269 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvfq4\" (UniqueName: \"kubernetes.io/projected/867917b7-904f-46b3-b1d3-6f9f760aabc7-kube-api-access-fvfq4\") pod \"dnsmasq-dns-666b6646f7-xrp8l\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.355566 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.356218 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tv2zr"]
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.385812 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tlgx5"]
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.387402 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.430198 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tlgx5"]
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.450000 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-config\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.450149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.450179 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp46h\" (UniqueName: \"kubernetes.io/projected/a4ee62cf-bcef-4904-a262-600ed17f3719-kube-api-access-kp46h\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.552151 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.552206 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp46h\" (UniqueName: \"kubernetes.io/projected/a4ee62cf-bcef-4904-a262-600ed17f3719-kube-api-access-kp46h\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.552265 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-config\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.553111 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-config\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.553602 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.592892 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp46h\" (UniqueName: \"kubernetes.io/projected/a4ee62cf-bcef-4904-a262-600ed17f3719-kube-api-access-kp46h\") pod \"dnsmasq-dns-57d769cc4f-tlgx5\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") " pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:52 crc kubenswrapper[4897]: I0214 19:02:52.810712 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.060574 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xrp8l"]
Feb 14 19:02:53 crc kubenswrapper[4897]: W0214 19:02:53.075507 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod867917b7_904f_46b3_b1d3_6f9f760aabc7.slice/crio-e88fee1812107a76d5ff1dfb78b02206697fadfd581ce075f0d90af9b3beb734 WatchSource:0}: Error finding container e88fee1812107a76d5ff1dfb78b02206697fadfd581ce075f0d90af9b3beb734: Status 404 returned error can't find the container with id e88fee1812107a76d5ff1dfb78b02206697fadfd581ce075f0d90af9b3beb734
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.180657 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.185737 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.188186 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.188563 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.188614 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.188973 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hdzhq"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.188985 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.188991 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.189052 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.202427 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.214642 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.216169 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.234073 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.235584 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.248766 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.254902 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261421 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb488b-8b48-4dea-8a34-dee3346005ef-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261465 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261493 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261522 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261544 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261565 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261580 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261596 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-config-data\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261616 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261638 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261657 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261676 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261701 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261716 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-config-data\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261750 4897 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261771 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261790 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvpmx\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-kube-api-access-xvpmx\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261809 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32d6ef5f-5f6d-4563-91e7-94928fbe901d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261825 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32d6ef5f-5f6d-4563-91e7-94928fbe901d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261846 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261871 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2xq\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-kube-api-access-nr2xq\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.261894 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb488b-8b48-4dea-8a34-dee3346005ef-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.287899 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tlgx5"] Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363158 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-config-data\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363590 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363620 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363644 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363681 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363720 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363764 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-config-data\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363804 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363844 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363874 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363919 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363941 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-pod-info\") pod \"rabbitmq-server-2\" (UID: 
\"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363962 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363976 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-config-data\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.363992 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvpmx\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-kube-api-access-xvpmx\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364088 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32d6ef5f-5f6d-4563-91e7-94928fbe901d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364117 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32d6ef5f-5f6d-4563-91e7-94928fbe901d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364184 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364215 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364253 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvd4\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-kube-api-access-xwvd4\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364275 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364312 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr2xq\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-kube-api-access-nr2xq\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364382 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb488b-8b48-4dea-8a34-dee3346005ef-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364431 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb488b-8b48-4dea-8a34-dee3346005ef-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364540 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364614 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364669 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364699 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364786 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-config-data\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364845 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.364872 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: 
\"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.365831 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-server-conf\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.367309 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-config-data\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.367512 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.368694 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.369117 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.369172 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32d6ef5f-5f6d-4563-91e7-94928fbe901d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.369384 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.370403 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.371306 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.372020 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb488b-8b48-4dea-8a34-dee3346005ef-pod-info\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.372374 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " 
pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.372767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.373352 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb488b-8b48-4dea-8a34-dee3346005ef-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.373950 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.373983 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8298b982ac0a8950d87841fa11447940cdea275839e8718e250a1f9acab59f7/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.374747 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.374773 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bdd9fdd2f7f7e3465101c97ccaf93539e86bf50672dc0be4645c042fca69f0d6/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.374949 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.375272 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.376434 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.377360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32d6ef5f-5f6d-4563-91e7-94928fbe901d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " 
pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.381855 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvpmx\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-kube-api-access-xvpmx\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.388781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr2xq\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-kube-api-access-nr2xq\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.412842 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") " pod="openstack/rabbitmq-server-0" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.425195 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " pod="openstack/rabbitmq-server-1" Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.468992 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2" Feb 14 19:02:53 crc kubenswrapper[4897]: 
I0214 19:02:53.469069 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469087 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469104 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469154 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469175 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvd4\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-kube-api-access-xwvd4\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469190 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469244 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469268 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469289 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-config-data\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.469338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.472933 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.473784 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.475343 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.476384 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.476789 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.476791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.476817 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db3bb9145c21dd13780a516e6cf8590bb629ffd0f8f03124b19a4bac524d871f/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.479583 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-config-data\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.491542 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.491574 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.496040 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvd4\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-kube-api-access-xwvd4\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.514991 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.527570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.537616 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.540240 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.548817 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.560566 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.567883 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.567989 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.568229 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4sqls"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.568367 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.568512 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.568645 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.568753 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.581989 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") " pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.672900 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.672979 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673151 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673237 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673276 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673291 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673358 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673373 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673414 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsjw\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-kube-api-access-pbsjw\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673487 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.673507 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.775793 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776371 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776407 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776437 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776455 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776477 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsjw\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-kube-api-access-pbsjw\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776510 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776528 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.776552 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.778138 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.786559 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.792112 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.793433 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.794486 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.795767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.796851 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.796908 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ba554ac9cd7bf9719c3c599063f28fa348ae684b1c7ff81601658ac87c0ecab/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.797108 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.818998 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsjw\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-kube-api-access-pbsjw\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.822085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.839404 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.840672 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:53 crc kubenswrapper[4897]: I0214 19:02:53.907235 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.068072 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.094425 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l" event={"ID":"867917b7-904f-46b3-b1d3-6f9f760aabc7","Type":"ContainerStarted","Data":"e88fee1812107a76d5ff1dfb78b02206697fadfd581ce075f0d90af9b3beb734"}
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.104689 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5" event={"ID":"a4ee62cf-bcef-4904-a262-600ed17f3719","Type":"ContainerStarted","Data":"5843b8eacbc559a16fdc395dcd3d3b1f48bf95cb8654cc7a0329d2844baafda5"}
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.167167 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.239794 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:02:54 crc kubenswrapper[4897]: W0214 19:02:54.258855 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8eb488b_8b48_4dea_8a34_dee3346005ef.slice/crio-41d7f57883ed13bd8d08d07218a119dbac5e13e0d5bbc38cae3d44024d9798af WatchSource:0}: Error finding container 41d7f57883ed13bd8d08d07218a119dbac5e13e0d5bbc38cae3d44024d9798af: Status 404 returned error can't find the container with id 41d7f57883ed13bd8d08d07218a119dbac5e13e0d5bbc38cae3d44024d9798af
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.374666 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 19:02:54 crc kubenswrapper[4897]: W0214 19:02:54.452432 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e532d34_b3bb_4f63_bc64_6b6cc22666b0.slice/crio-35d51b28cecbf76e932afedcc230bbc3f85fdff73e6fb5a862f4742fc75228d7 WatchSource:0}: Error finding container 35d51b28cecbf76e932afedcc230bbc3f85fdff73e6fb5a862f4742fc75228d7: Status 404 returned error can't find the container with id 35d51b28cecbf76e932afedcc230bbc3f85fdff73e6fb5a862f4742fc75228d7
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.610083 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.616181 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.619303 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.619583 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.619655 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-74jmw"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.623544 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.625291 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.627106 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.699598 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709183 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2787931c-debf-40ed-9232-eee463f18148\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2787931c-debf-40ed-9232-eee463f18148\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709301 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709347 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-kolla-config\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709497 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709551 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwt5\" (UniqueName: \"kubernetes.io/projected/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-kube-api-access-qtwt5\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709600 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-config-data-default\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.709646 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.710510 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 14 19:02:54 crc kubenswrapper[4897]: W0214 19:02:54.711060 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75b00edc_276b_4e3b_84c1_db17e1eeb3ee.slice/crio-15b99b673350073944d84bf07a5b46fdd24e4605cae1e1700b21a73edc4a2dd3 WatchSource:0}: Error finding container 15b99b673350073944d84bf07a5b46fdd24e4605cae1e1700b21a73edc4a2dd3: Status 404 returned error can't find the container with id 15b99b673350073944d84bf07a5b46fdd24e4605cae1e1700b21a73edc4a2dd3
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813674 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813745 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwt5\" (UniqueName: \"kubernetes.io/projected/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-kube-api-access-qtwt5\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813773 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-config-data-default\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813896 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2787931c-debf-40ed-9232-eee463f18148\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2787931c-debf-40ed-9232-eee463f18148\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813964 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.813988 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-kolla-config\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.814693 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-kolla-config\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.815980 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.816412 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-config-data-default\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.816576 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.819195 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.819234 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2787931c-debf-40ed-9232-eee463f18148\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2787931c-debf-40ed-9232-eee463f18148\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f6d0f85c91c2a0f7df2d5890bde756576990a2f24ec54013a72c830579882c8/globalmount\"" pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.828429 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.837203 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtwt5\" (UniqueName: \"kubernetes.io/projected/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-kube-api-access-qtwt5\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:54 crc kubenswrapper[4897]: I0214 19:02:54.846893 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b3d12-d5db-435a-ba48-fbe1e31fef96-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.061929 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2787931c-debf-40ed-9232-eee463f18148\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2787931c-debf-40ed-9232-eee463f18148\") pod \"openstack-galera-0\" (UID: \"9a8b3d12-d5db-435a-ba48-fbe1e31fef96\") " pod="openstack/openstack-galera-0"
Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.148632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c8eb488b-8b48-4dea-8a34-dee3346005ef","Type":"ContainerStarted","Data":"41d7f57883ed13bd8d08d07218a119dbac5e13e0d5bbc38cae3d44024d9798af"}
Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.151150 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3e532d34-b3bb-4f63-bc64-6b6cc22666b0","Type":"ContainerStarted","Data":"35d51b28cecbf76e932afedcc230bbc3f85fdff73e6fb5a862f4742fc75228d7"}
Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.153084 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32d6ef5f-5f6d-4563-91e7-94928fbe901d","Type":"ContainerStarted","Data":"e60cf35cbde19440beb3f4aa715cf94e9620365e6deae980ed1dbe70aac693e7"}
Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.160261 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"75b00edc-276b-4e3b-84c1-db17e1eeb3ee","Type":"ContainerStarted","Data":"15b99b673350073944d84bf07a5b46fdd24e4605cae1e1700b21a73edc4a2dd3"}
Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.300651 4897 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstack-galera-0" Feb 14 19:02:55 crc kubenswrapper[4897]: I0214 19:02:55.854700 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.127261 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.141777 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.141869 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.145429 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gwffv" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.145477 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.145746 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.146071 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.173508 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9a8b3d12-d5db-435a-ba48-fbe1e31fef96","Type":"ContainerStarted","Data":"5ae769584024dea9d65b799fbb7aa5ad098635cc7cf0c1c550cd278df6f94285"} Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.257823 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.257884 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.257909 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqhh\" (UniqueName: \"kubernetes.io/projected/fdda6cd9-a603-4bb0-8595-3d128fc9e324-kube-api-access-qjqhh\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.257947 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdda6cd9-a603-4bb0-8595-3d128fc9e324-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.258023 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda6cd9-a603-4bb0-8595-3d128fc9e324-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.258082 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.258108 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fdda6cd9-a603-4bb0-8595-3d128fc9e324-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.258140 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.269883 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.271503 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.280385 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.280547 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.280637 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5rzkb" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.285812 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.359642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360060 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjqhh\" (UniqueName: \"kubernetes.io/projected/fdda6cd9-a603-4bb0-8595-3d128fc9e324-kube-api-access-qjqhh\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360142 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdda6cd9-a603-4bb0-8595-3d128fc9e324-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360227 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-memcached-tls-certs\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360292 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda6cd9-a603-4bb0-8595-3d128fc9e324-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360339 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sckkg\" (UniqueName: \"kubernetes.io/projected/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-kube-api-access-sckkg\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-combined-ca-bundle\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360497 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360517 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-kolla-config\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360555 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fdda6cd9-a603-4bb0-8595-3d128fc9e324-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360574 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-config-data\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360660 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.360701 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.361397 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-kolla-config\") 
pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.361520 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fdda6cd9-a603-4bb0-8595-3d128fc9e324-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.363020 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.364008 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.364055 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1949c37a3878bccd1fba24778437ce882e063ba126450a95d35ca57a1b0f424c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.364676 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdda6cd9-a603-4bb0-8595-3d128fc9e324-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.364953 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdda6cd9-a603-4bb0-8595-3d128fc9e324-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.367089 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda6cd9-a603-4bb0-8595-3d128fc9e324-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.375084 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqhh\" (UniqueName: \"kubernetes.io/projected/fdda6cd9-a603-4bb0-8595-3d128fc9e324-kube-api-access-qjqhh\") pod 
\"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.458909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c4c92cf6-f49c-4e8c-b641-36a43532a04a\") pod \"openstack-cell1-galera-0\" (UID: \"fdda6cd9-a603-4bb0-8595-3d128fc9e324\") " pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.463116 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-memcached-tls-certs\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.463204 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sckkg\" (UniqueName: \"kubernetes.io/projected/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-kube-api-access-sckkg\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.463298 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-combined-ca-bundle\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.463321 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-kolla-config\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc 
kubenswrapper[4897]: I0214 19:02:56.463449 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.463958 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-config-data\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.464127 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-kolla-config\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.464684 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-config-data\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.467592 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-memcached-tls-certs\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.468426 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-combined-ca-bundle\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.478864 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sckkg\" (UniqueName: \"kubernetes.io/projected/429062cc-8ca1-4e1f-a1b3-d84bbd4d15df-kube-api-access-sckkg\") pod \"memcached-0\" (UID: \"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df\") " pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.606959 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 14 19:02:56 crc kubenswrapper[4897]: I0214 19:02:56.927019 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 14 19:02:56 crc kubenswrapper[4897]: W0214 19:02:56.930225 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda6cd9_a603_4bb0_8595_3d128fc9e324.slice/crio-ffe6ba5bb5faf5979743426b140b4b85ad049eb49aa08d27fe53f073663d196c WatchSource:0}: Error finding container ffe6ba5bb5faf5979743426b140b4b85ad049eb49aa08d27fe53f073663d196c: Status 404 returned error can't find the container with id ffe6ba5bb5faf5979743426b140b4b85ad049eb49aa08d27fe53f073663d196c Feb 14 19:02:57 crc kubenswrapper[4897]: I0214 19:02:57.119163 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 14 19:02:57 crc kubenswrapper[4897]: W0214 19:02:57.124350 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod429062cc_8ca1_4e1f_a1b3_d84bbd4d15df.slice/crio-3fa93925ccb38f948cf08f4134d358188051387ede3a7574fd220c19342edd10 WatchSource:0}: Error finding container 3fa93925ccb38f948cf08f4134d358188051387ede3a7574fd220c19342edd10: Status 404 returned error can't find the container with id 3fa93925ccb38f948cf08f4134d358188051387ede3a7574fd220c19342edd10 Feb 14 19:02:57 crc kubenswrapper[4897]: I0214 19:02:57.182909 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"fdda6cd9-a603-4bb0-8595-3d128fc9e324","Type":"ContainerStarted","Data":"ffe6ba5bb5faf5979743426b140b4b85ad049eb49aa08d27fe53f073663d196c"} Feb 14 19:02:57 crc kubenswrapper[4897]: I0214 19:02:57.184809 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df","Type":"ContainerStarted","Data":"3fa93925ccb38f948cf08f4134d358188051387ede3a7574fd220c19342edd10"} Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.531951 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.533506 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.539831 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kv4qb" Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.552369 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.621114 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdcf\" (UniqueName: \"kubernetes.io/projected/31fc1ad2-32a3-4e47-846f-a69e5ee34493-kube-api-access-dkdcf\") pod \"kube-state-metrics-0\" (UID: \"31fc1ad2-32a3-4e47-846f-a69e5ee34493\") " pod="openstack/kube-state-metrics-0" Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.723344 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkdcf\" (UniqueName: \"kubernetes.io/projected/31fc1ad2-32a3-4e47-846f-a69e5ee34493-kube-api-access-dkdcf\") pod \"kube-state-metrics-0\" (UID: \"31fc1ad2-32a3-4e47-846f-a69e5ee34493\") " pod="openstack/kube-state-metrics-0" Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.753316 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkdcf\" (UniqueName: \"kubernetes.io/projected/31fc1ad2-32a3-4e47-846f-a69e5ee34493-kube-api-access-dkdcf\") pod \"kube-state-metrics-0\" (UID: \"31fc1ad2-32a3-4e47-846f-a69e5ee34493\") " pod="openstack/kube-state-metrics-0" Feb 14 19:02:58 crc kubenswrapper[4897]: I0214 19:02:58.883529 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.170605 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt"] Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.171749 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.180128 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.180139 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-ll5mb" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.193667 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt"] Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.347128 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7683e04b-bb89-48c2-bff0-75d052f26e7f-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.347218 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fr5j\" (UniqueName: \"kubernetes.io/projected/7683e04b-bb89-48c2-bff0-75d052f26e7f-kube-api-access-9fr5j\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.448768 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7683e04b-bb89-48c2-bff0-75d052f26e7f-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.448833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fr5j\" (UniqueName: \"kubernetes.io/projected/7683e04b-bb89-48c2-bff0-75d052f26e7f-kube-api-access-9fr5j\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: E0214 19:02:59.448956 4897 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 14 19:02:59 crc kubenswrapper[4897]: E0214 19:02:59.449049 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7683e04b-bb89-48c2-bff0-75d052f26e7f-serving-cert podName:7683e04b-bb89-48c2-bff0-75d052f26e7f nodeName:}" failed. No retries permitted until 2026-02-14 19:02:59.94901927 +0000 UTC m=+1232.925427753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7683e04b-bb89-48c2-bff0-75d052f26e7f-serving-cert") pod "observability-ui-dashboards-66cbf594b5-t9xgt" (UID: "7683e04b-bb89-48c2-bff0-75d052f26e7f") : secret "observability-ui-dashboards" not found Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.482894 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fr5j\" (UniqueName: \"kubernetes.io/projected/7683e04b-bb89-48c2-bff0-75d052f26e7f-kube-api-access-9fr5j\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.572276 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f7fb6d64c-hkskf"] Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.573503 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.591172 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f7fb6d64c-hkskf"] Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652309 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-trusted-ca-bundle\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652598 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-service-ca\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652629 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9g8c\" (UniqueName: \"kubernetes.io/projected/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-kube-api-access-k9g8c\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652687 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-serving-cert\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652725 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-config\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-oauth-serving-cert\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.652773 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-oauth-config\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.666330 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.674136 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.682637 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.682855 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.683008 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-b7qjw" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.683154 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.686871 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.687061 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.687212 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.687409 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.690392 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.757904 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-config\") pod 
\"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.757982 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gdd\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-kube-api-access-l7gdd\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758036 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-oauth-serving-cert\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758058 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-oauth-config\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758107 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758134 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-config\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-trusted-ca-bundle\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758271 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-service-ca\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758291 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/42b73b5c-bc43-4e91-9e3d-255ed69831db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758756 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758789 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9g8c\" (UniqueName: \"kubernetes.io/projected/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-kube-api-access-k9g8c\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758812 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758855 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758916 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.758961 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-serving-cert\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.759018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.759896 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-oauth-serving-cert\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.759995 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-config\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.761301 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-trusted-ca-bundle\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 
19:02:59.762551 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-service-ca\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.768597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-oauth-config\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.784982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9g8c\" (UniqueName: \"kubernetes.io/projected/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-kube-api-access-k9g8c\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.801969 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77572d7-6aef-4c6c-bb23-bdb47d9d28ee-console-serving-cert\") pod \"console-7f7fb6d64c-hkskf\" (UID: \"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee\") " pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867117 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867423 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7gdd\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-kube-api-access-l7gdd\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867819 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867839 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 
14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867903 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.867942 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-config\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.868001 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/42b73b5c-bc43-4e91-9e3d-255ed69831db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.868042 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.868066 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 
19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.868924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.869356 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.876798 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/42b73b5c-bc43-4e91-9e3d-255ed69831db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.878444 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.880959 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 
19:02:59.882612 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-config\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.893384 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.893427 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8463609d0a12805b11ee43aef10868d3872f9002ead69ad9b6a8dbbf5475c501/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.900058 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7gdd\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-kube-api-access-l7gdd\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.901118 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.918635 4897 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.944298 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " pod="openstack/prometheus-metric-storage-0" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.973125 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7683e04b-bb89-48c2-bff0-75d052f26e7f-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:02:59 crc kubenswrapper[4897]: I0214 19:02:59.976658 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7683e04b-bb89-48c2-bff0-75d052f26e7f-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-t9xgt\" (UID: \"7683e04b-bb89-48c2-bff0-75d052f26e7f\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:03:00 crc kubenswrapper[4897]: I0214 19:03:00.009066 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 19:03:00 crc kubenswrapper[4897]: I0214 19:03:00.123083 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.725436 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.725756 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.725800 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.726583 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.726640 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740" gracePeriod=600 Feb 14 19:03:01 crc kubenswrapper[4897]: E0214 19:03:01.847552 4897 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.882520 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wlxqg"] Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.883688 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wlxqg" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.885095 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.888799 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qxpwb" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.889236 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.916895 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-8jqrb"] Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.918896 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.926709 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wlxqg"] Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.937739 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8jqrb"] Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.978982 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.982365 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.985771 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.985931 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.985973 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.985943 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.986195 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7bgck" Feb 14 19:03:01 crc kubenswrapper[4897]: I0214 19:03:01.999542 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.017179 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-etc-ovs\") pod 
\"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.017241 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a557e7-f135-4a79-9525-aed106fd814c-ovn-controller-tls-certs\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg" Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.017461 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-run\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg" Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020500 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/643a69d8-25d7-4261-8848-0793ca7368fb-scripts\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020543 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxz5\" (UniqueName: \"kubernetes.io/projected/643a69d8-25d7-4261-8848-0793ca7368fb-kube-api-access-dcxz5\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020573 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a557e7-f135-4a79-9525-aed106fd814c-combined-ca-bundle\") pod 
\"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020624 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-run-ovn\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020680 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-log-ovn\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020744 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-lib\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020766 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-log\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020839 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-run\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.020874 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c752\" (UniqueName: \"kubernetes.io/projected/c6a557e7-f135-4a79-9525-aed106fd814c-kube-api-access-6c752\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.022080 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a557e7-f135-4a79-9525-aed106fd814c-scripts\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123276 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123326 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-run\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123358 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c752\" (UniqueName: \"kubernetes.io/projected/c6a557e7-f135-4a79-9525-aed106fd814c-kube-api-access-6c752\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123416 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a557e7-f135-4a79-9525-aed106fd814c-scripts\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123462 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-etc-ovs\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123478 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a557e7-f135-4a79-9525-aed106fd814c-ovn-controller-tls-certs\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123500 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-run\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123516 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/643a69d8-25d7-4261-8848-0793ca7368fb-scripts\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123535 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcxz5\" (UniqueName: \"kubernetes.io/projected/643a69d8-25d7-4261-8848-0793ca7368fb-kube-api-access-dcxz5\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123550 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a557e7-f135-4a79-9525-aed106fd814c-combined-ca-bundle\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123570 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcgv9\" (UniqueName: \"kubernetes.io/projected/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-kube-api-access-dcgv9\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123597 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-run-ovn\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123613 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-log-ovn\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123659 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123677 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123698 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123717 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123732 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123751 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-lib\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.123770 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-log\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.124111 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-run\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.124182 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-log\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.124254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-run-ovn\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.124306 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-log-ovn\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.124614 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-var-lib\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.125732 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/643a69d8-25d7-4261-8848-0793ca7368fb-etc-ovs\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.128155 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/643a69d8-25d7-4261-8848-0793ca7368fb-scripts\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.128351 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c6a557e7-f135-4a79-9525-aed106fd814c-var-run\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.130912 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6a557e7-f135-4a79-9525-aed106fd814c-scripts\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.131830 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a557e7-f135-4a79-9525-aed106fd814c-combined-ca-bundle\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.132489 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a557e7-f135-4a79-9525-aed106fd814c-ovn-controller-tls-certs\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.143333 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c752\" (UniqueName: \"kubernetes.io/projected/c6a557e7-f135-4a79-9525-aed106fd814c-kube-api-access-6c752\") pod \"ovn-controller-wlxqg\" (UID: \"c6a557e7-f135-4a79-9525-aed106fd814c\") " pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.152762 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcxz5\" (UniqueName: \"kubernetes.io/projected/643a69d8-25d7-4261-8848-0793ca7368fb-kube-api-access-dcxz5\") pod \"ovn-controller-ovs-8jqrb\" (UID: \"643a69d8-25d7-4261-8848-0793ca7368fb\") " pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.210061 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wlxqg"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.224998 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcgv9\" (UniqueName: \"kubernetes.io/projected/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-kube-api-access-dcgv9\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225079 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225143 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225175 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225837 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225866 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.225945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.226441 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.228230 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.229193 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.230858 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.231287 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.231459 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8303936ae3a84edf0e5ccbc2ae6b890d4d16d05051d2895b526275ad8d17c3e8/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.231399 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.231950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.240075 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-8jqrb"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.242016 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcgv9\" (UniqueName: \"kubernetes.io/projected/1d77a004-19c2-43a0-bbe7-6e94f0d05a4e-kube-api-access-dcgv9\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.257892 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740" exitCode=0
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.257934 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740"}
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.257967 4897 scope.go:117] "RemoveContainer" containerID="f530591baa3a6bc6b0de2a6354906a1508c867fd239d41af91ab4794b66dc167"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.277914 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee580a05-baa9-4ef9-a585-8e595fbe2d65\") pod \"ovsdbserver-nb-0\" (UID: \"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e\") " pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:02 crc kubenswrapper[4897]: I0214 19:03:02.302537 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.053785 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.058831 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.062398 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fhs8f"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.062752 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.062813 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.063110 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.068258 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.108860 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.108915 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.108952 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ckz\" (UniqueName: \"kubernetes.io/projected/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-kube-api-access-b9ckz\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.109057 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-config\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.109084 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.109111 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.109186 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.109218 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4657aea4-f55f-4657-9d08-364c921b98cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4657aea4-f55f-4657-9d08-364c921b98cb\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.210712 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ckz\" (UniqueName: \"kubernetes.io/projected/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-kube-api-access-b9ckz\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.210824 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-config\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.210873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.210924 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.211075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.211137 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4657aea4-f55f-4657-9d08-364c921b98cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4657aea4-f55f-4657-9d08-364c921b98cb\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.211218 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.211264 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.211937 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.212058 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-config\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.213355 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.216696 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.216734 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4657aea4-f55f-4657-9d08-364c921b98cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4657aea4-f55f-4657-9d08-364c921b98cb\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7542622e33c77fa983d4001ff14a1c0cddef04e8ff559c14833148d6421a1905/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.218515 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.220229 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.222841 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.225887 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ckz\" (UniqueName: \"kubernetes.io/projected/bbbc45ca-578f-42e4-b2e9-596c8b2587a1-kube-api-access-b9ckz\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.255790 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4657aea4-f55f-4657-9d08-364c921b98cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4657aea4-f55f-4657-9d08-364c921b98cb\") pod \"ovsdbserver-sb-0\" (UID: \"bbbc45ca-578f-42e4-b2e9-596c8b2587a1\") " pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:06 crc kubenswrapper[4897]: I0214 19:03:06.378048 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 14 19:03:12 crc kubenswrapper[4897]: E0214 19:03:12.462485 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 14 19:03:12 crc kubenswrapper[4897]: E0214 19:03:12.463231 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7jp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-x827s_openstack(7d835d2d-ab92-4f38-910d-903b14c84bf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:03:12 crc kubenswrapper[4897]: E0214 19:03:12.465271 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s" podUID="7d835d2d-ab92-4f38-910d-903b14c84bf8" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.594310 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.594734 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glczv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-tv2zr_openstack(98b49e1d-0ebd-44d6-b70b-ef73531226f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.595882 4897 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr" podUID="98b49e1d-0ebd-44d6-b70b-ef73531226f3" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.614627 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.614784 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvfq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xrp8l_openstack(867917b7-904f-46b3-b1d3-6f9f760aabc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.615972 4897 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l" podUID="867917b7-904f-46b3-b1d3-6f9f760aabc7" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.635626 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.635810 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nr2xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(32d6ef5f-5f6d-4563-91e7-94928fbe901d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:03:16 crc 
kubenswrapper[4897]: E0214 19:03:16.637278 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.671311 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 19:03:16 crc kubenswrapper[4897]: E0214 19:03:16.671492 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvpmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(c8eb488b-8b48-4dea-8a34-dee3346005ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:03:16 crc 
kubenswrapper[4897]: E0214 19:03:16.672753 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" Feb 14 19:03:17 crc kubenswrapper[4897]: E0214 19:03:17.416178 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" Feb 14 19:03:17 crc kubenswrapper[4897]: E0214 19:03:17.416211 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" Feb 14 19:03:17 crc kubenswrapper[4897]: E0214 19:03:17.416273 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l" podUID="867917b7-904f-46b3-b1d3-6f9f760aabc7" Feb 14 19:03:18 crc kubenswrapper[4897]: E0214 19:03:18.697645 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 19:03:18 crc kubenswrapper[4897]: E0214 19:03:18.698429 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kp46h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-tlgx5_openstack(a4ee62cf-bcef-4904-a262-600ed17f3719): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:03:18 crc kubenswrapper[4897]: E0214 19:03:18.699791 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5" podUID="a4ee62cf-bcef-4904-a262-600ed17f3719" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.830284 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.846246 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jp8\" (UniqueName: \"kubernetes.io/projected/7d835d2d-ab92-4f38-910d-903b14c84bf8-kube-api-access-v7jp8\") pod \"7d835d2d-ab92-4f38-910d-903b14c84bf8\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.846375 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d835d2d-ab92-4f38-910d-903b14c84bf8-config\") pod \"7d835d2d-ab92-4f38-910d-903b14c84bf8\" (UID: \"7d835d2d-ab92-4f38-910d-903b14c84bf8\") " Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.852605 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d835d2d-ab92-4f38-910d-903b14c84bf8-config" (OuterVolumeSpecName: "config") pod "7d835d2d-ab92-4f38-910d-903b14c84bf8" (UID: "7d835d2d-ab92-4f38-910d-903b14c84bf8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.865385 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d835d2d-ab92-4f38-910d-903b14c84bf8-kube-api-access-v7jp8" (OuterVolumeSpecName: "kube-api-access-v7jp8") pod "7d835d2d-ab92-4f38-910d-903b14c84bf8" (UID: "7d835d2d-ab92-4f38-910d-903b14c84bf8"). InnerVolumeSpecName "kube-api-access-v7jp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.889470 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.948984 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-dns-svc\") pod \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.949572 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-config\") pod \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.949779 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glczv\" (UniqueName: \"kubernetes.io/projected/98b49e1d-0ebd-44d6-b70b-ef73531226f3-kube-api-access-glczv\") pod \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\" (UID: \"98b49e1d-0ebd-44d6-b70b-ef73531226f3\") " Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.950694 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d835d2d-ab92-4f38-910d-903b14c84bf8-config\") on node \"crc\" DevicePath 
\"\"" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.950713 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jp8\" (UniqueName: \"kubernetes.io/projected/7d835d2d-ab92-4f38-910d-903b14c84bf8-kube-api-access-v7jp8\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.950980 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98b49e1d-0ebd-44d6-b70b-ef73531226f3" (UID: "98b49e1d-0ebd-44d6-b70b-ef73531226f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.951983 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-config" (OuterVolumeSpecName: "config") pod "98b49e1d-0ebd-44d6-b70b-ef73531226f3" (UID: "98b49e1d-0ebd-44d6-b70b-ef73531226f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:18 crc kubenswrapper[4897]: I0214 19:03:18.995314 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b49e1d-0ebd-44d6-b70b-ef73531226f3-kube-api-access-glczv" (OuterVolumeSpecName: "kube-api-access-glczv") pod "98b49e1d-0ebd-44d6-b70b-ef73531226f3" (UID: "98b49e1d-0ebd-44d6-b70b-ef73531226f3"). InnerVolumeSpecName "kube-api-access-glczv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.060980 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.061016 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98b49e1d-0ebd-44d6-b70b-ef73531226f3-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.061067 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glczv\" (UniqueName: \"kubernetes.io/projected/98b49e1d-0ebd-44d6-b70b-ef73531226f3-kube-api-access-glczv\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.444880 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr" event={"ID":"98b49e1d-0ebd-44d6-b70b-ef73531226f3","Type":"ContainerDied","Data":"d170adb0af22bc33128be4b1735e445a0aebac6899bec8718e03c4344bdba75c"} Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.445149 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tv2zr" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.448659 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9a8b3d12-d5db-435a-ba48-fbe1e31fef96","Type":"ContainerStarted","Data":"ebc6c2c04a6f8669eb059fbc5d926ac0200f460f2297b585e57665cf1f6bce89"} Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.451347 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s" event={"ID":"7d835d2d-ab92-4f38-910d-903b14c84bf8","Type":"ContainerDied","Data":"144b0a420ff8b86264e422e4d5d7417c60b105e6664bb42d0f0573c88262990f"} Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.451424 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x827s" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.455630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fdda6cd9-a603-4bb0-8595-3d128fc9e324","Type":"ContainerStarted","Data":"cf5f710119861c7db68241dca81b379087b8f4b983f403594fbb16fe23e93e84"} Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.459181 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"235f7e04d5c8603ba95b93f15134ed139784ade9cf49c6bd1886aa661c14e66a"} Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.466376 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"429062cc-8ca1-4e1f-a1b3-d84bbd4d15df","Type":"ContainerStarted","Data":"31d40f8f7e5b79c528422581f4263fe6e68f341d058571a8dc79fe740cef3c6a"} Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.466430 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 14 19:03:19 
crc kubenswrapper[4897]: E0214 19:03:19.482348 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5" podUID="a4ee62cf-bcef-4904-a262-600ed17f3719" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.571773 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.968776107 podStartE2EDuration="23.571750593s" podCreationTimestamp="2026-02-14 19:02:56 +0000 UTC" firstStartedPulling="2026-02-14 19:02:57.1274223 +0000 UTC m=+1230.103830783" lastFinishedPulling="2026-02-14 19:03:18.730396776 +0000 UTC m=+1251.706805269" observedRunningTime="2026-02-14 19:03:19.562376864 +0000 UTC m=+1252.538785357" watchObservedRunningTime="2026-02-14 19:03:19.571750593 +0000 UTC m=+1252.548159076" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.615160 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.633629 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.648909 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tv2zr"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.659570 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tv2zr"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.679051 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wlxqg"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.700692 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x827s"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 
19:03:19.713668 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x827s"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.814057 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d835d2d-ab92-4f38-910d-903b14c84bf8" path="/var/lib/kubelet/pods/7d835d2d-ab92-4f38-910d-903b14c84bf8/volumes" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.816181 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b49e1d-0ebd-44d6-b70b-ef73531226f3" path="/var/lib/kubelet/pods/98b49e1d-0ebd-44d6-b70b-ef73531226f3/volumes" Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.953422 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt"] Feb 14 19:03:19 crc kubenswrapper[4897]: I0214 19:03:19.961424 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f7fb6d64c-hkskf"] Feb 14 19:03:19 crc kubenswrapper[4897]: W0214 19:03:19.989198 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7683e04b_bb89_48c2_bff0_75d052f26e7f.slice/crio-f891e273144cd47394e85424777f8e561839d9a10843b6fbe6ac403da2823148 WatchSource:0}: Error finding container f891e273144cd47394e85424777f8e561839d9a10843b6fbe6ac403da2823148: Status 404 returned error can't find the container with id f891e273144cd47394e85424777f8e561839d9a10843b6fbe6ac403da2823148 Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.080798 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.174198 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 19:03:20 crc kubenswrapper[4897]: W0214 19:03:20.190371 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d77a004_19c2_43a0_bbe7_6e94f0d05a4e.slice/crio-835fcc9626081878356f4ffb6b5b288d4565f4ce95877a953e44754cf409e32e WatchSource:0}: Error finding container 835fcc9626081878356f4ffb6b5b288d4565f4ce95877a953e44754cf409e32e: Status 404 returned error can't find the container with id 835fcc9626081878356f4ffb6b5b288d4565f4ce95877a953e44754cf409e32e Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.293570 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8jqrb"] Feb 14 19:03:20 crc kubenswrapper[4897]: W0214 19:03:20.383169 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643a69d8_25d7_4261_8848_0793ca7368fb.slice/crio-f121798ad254c0e349cd350baa3036e2b432f4f474b36892c0a522c31dcb51a9 WatchSource:0}: Error finding container f121798ad254c0e349cd350baa3036e2b432f4f474b36892c0a522c31dcb51a9: Status 404 returned error can't find the container with id f121798ad254c0e349cd350baa3036e2b432f4f474b36892c0a522c31dcb51a9 Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.479099 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerStarted","Data":"178227235efbc6fdf2a9a03f9742b3057f52abc07163ec63bf042cd3ccc28931"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.480585 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bbbc45ca-578f-42e4-b2e9-596c8b2587a1","Type":"ContainerStarted","Data":"17c687dea9ad188657cb6ea94ae22465211f1ae1229ae5950f162ae785df9a12"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.481627 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f7fb6d64c-hkskf" 
event={"ID":"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee","Type":"ContainerStarted","Data":"789101a7f2f3632033f1a6d864281e915f340ca4c67268e1ece9b005575d971f"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.483097 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31fc1ad2-32a3-4e47-846f-a69e5ee34493","Type":"ContainerStarted","Data":"3b9be166c63ce14e55a0f6c7be42d303c4aa911740756a271d1f765c030bb366"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.485267 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e","Type":"ContainerStarted","Data":"835fcc9626081878356f4ffb6b5b288d4565f4ce95877a953e44754cf409e32e"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.490015 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wlxqg" event={"ID":"c6a557e7-f135-4a79-9525-aed106fd814c","Type":"ContainerStarted","Data":"d13981270db903b03db35545e453b13d61a98059bc74bc4ed8d52afda00a6c8a"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.495349 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" event={"ID":"7683e04b-bb89-48c2-bff0-75d052f26e7f","Type":"ContainerStarted","Data":"f891e273144cd47394e85424777f8e561839d9a10843b6fbe6ac403da2823148"} Feb 14 19:03:20 crc kubenswrapper[4897]: I0214 19:03:20.497648 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8jqrb" event={"ID":"643a69d8-25d7-4261-8848-0793ca7368fb","Type":"ContainerStarted","Data":"f121798ad254c0e349cd350baa3036e2b432f4f474b36892c0a522c31dcb51a9"} Feb 14 19:03:21 crc kubenswrapper[4897]: I0214 19:03:21.509231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"3e532d34-b3bb-4f63-bc64-6b6cc22666b0","Type":"ContainerStarted","Data":"bb5453fc7c803ba4c78169d1d9f1ca44c2597e317e1cdc22384f1796b179a86c"} Feb 14 19:03:21 crc kubenswrapper[4897]: I0214 19:03:21.511404 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f7fb6d64c-hkskf" event={"ID":"e77572d7-6aef-4c6c-bb23-bdb47d9d28ee","Type":"ContainerStarted","Data":"33874808f94477e3e6a3e1c3a94b2b0c5f3257c6b5ae6cfb889c9b4cdfa1ce11"} Feb 14 19:03:21 crc kubenswrapper[4897]: I0214 19:03:21.512962 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"75b00edc-276b-4e3b-84c1-db17e1eeb3ee","Type":"ContainerStarted","Data":"cbdac35dc72f27a3253bb19267a193ec38202343ba5dde4d824ec972949ec729"} Feb 14 19:03:21 crc kubenswrapper[4897]: I0214 19:03:21.566092 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f7fb6d64c-hkskf" podStartSLOduration=22.566068564 podStartE2EDuration="22.566068564s" podCreationTimestamp="2026-02-14 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:03:21.553429373 +0000 UTC m=+1254.529837866" watchObservedRunningTime="2026-02-14 19:03:21.566068564 +0000 UTC m=+1254.542477067" Feb 14 19:03:23 crc kubenswrapper[4897]: I0214 19:03:23.538146 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9a8b3d12-d5db-435a-ba48-fbe1e31fef96","Type":"ContainerDied","Data":"ebc6c2c04a6f8669eb059fbc5d926ac0200f460f2297b585e57665cf1f6bce89"} Feb 14 19:03:23 crc kubenswrapper[4897]: I0214 19:03:23.538505 4897 generic.go:334] "Generic (PLEG): container finished" podID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerID="ebc6c2c04a6f8669eb059fbc5d926ac0200f460f2297b585e57665cf1f6bce89" exitCode=0 Feb 14 19:03:23 crc kubenswrapper[4897]: I0214 19:03:23.548939 4897 generic.go:334] 
"Generic (PLEG): container finished" podID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerID="cf5f710119861c7db68241dca81b379087b8f4b983f403594fbb16fe23e93e84" exitCode=0 Feb 14 19:03:23 crc kubenswrapper[4897]: I0214 19:03:23.548996 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fdda6cd9-a603-4bb0-8595-3d128fc9e324","Type":"ContainerDied","Data":"cf5f710119861c7db68241dca81b379087b8f4b983f403594fbb16fe23e93e84"} Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.602348 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8jqrb" event={"ID":"643a69d8-25d7-4261-8848-0793ca7368fb","Type":"ContainerStarted","Data":"ef724eb2080717f160843ee902979c28a3825f92f4872a02fd2bd7cedaebf614"} Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.606433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fdda6cd9-a603-4bb0-8595-3d128fc9e324","Type":"ContainerStarted","Data":"582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838"} Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.607922 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.610631 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31fc1ad2-32a3-4e47-846f-a69e5ee34493","Type":"ContainerStarted","Data":"286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff"} Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.613804 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9a8b3d12-d5db-435a-ba48-fbe1e31fef96","Type":"ContainerStarted","Data":"1ae909fc87abca6b70a54edb63d7f2c825f62160862049babf6d8c6c86b0dc8d"} Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.651170 4897 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.672607775 podStartE2EDuration="31.651153127s" podCreationTimestamp="2026-02-14 19:02:55 +0000 UTC" firstStartedPulling="2026-02-14 19:02:56.932678055 +0000 UTC m=+1229.909086538" lastFinishedPulling="2026-02-14 19:03:18.911223397 +0000 UTC m=+1251.887631890" observedRunningTime="2026-02-14 19:03:26.644387803 +0000 UTC m=+1259.620796306" watchObservedRunningTime="2026-02-14 19:03:26.651153127 +0000 UTC m=+1259.627561620" Feb 14 19:03:26 crc kubenswrapper[4897]: I0214 19:03:26.715368 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.494530938 podStartE2EDuration="33.715349689s" podCreationTimestamp="2026-02-14 19:02:53 +0000 UTC" firstStartedPulling="2026-02-14 19:02:55.870671128 +0000 UTC m=+1228.847079611" lastFinishedPulling="2026-02-14 19:03:19.091489879 +0000 UTC m=+1252.067898362" observedRunningTime="2026-02-14 19:03:26.708479531 +0000 UTC m=+1259.684888024" watchObservedRunningTime="2026-02-14 19:03:26.715349689 +0000 UTC m=+1259.691758172" Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.629979 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e","Type":"ContainerStarted","Data":"1a4d1e19b23ee8eb403900f9964a1d5bb03955ad07cf789a11f8b89552e735cd"} Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.633622 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wlxqg" event={"ID":"c6a557e7-f135-4a79-9525-aed106fd814c","Type":"ContainerStarted","Data":"bd2395ee0d8a7c6455c9c11eb8a208ac67f533c33505ed10e2784624f5e79fae"} Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.633793 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wlxqg" Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.636377 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" event={"ID":"7683e04b-bb89-48c2-bff0-75d052f26e7f","Type":"ContainerStarted","Data":"7cb857ecbdc97e822ec4d08a55a5f3a4d8be353a6c6959019263c923dca3558d"} Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.639182 4897 generic.go:334] "Generic (PLEG): container finished" podID="643a69d8-25d7-4261-8848-0793ca7368fb" containerID="ef724eb2080717f160843ee902979c28a3825f92f4872a02fd2bd7cedaebf614" exitCode=0 Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.639312 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8jqrb" event={"ID":"643a69d8-25d7-4261-8848-0793ca7368fb","Type":"ContainerDied","Data":"ef724eb2080717f160843ee902979c28a3825f92f4872a02fd2bd7cedaebf614"} Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.641988 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bbbc45ca-578f-42e4-b2e9-596c8b2587a1","Type":"ContainerStarted","Data":"10ddebdda8b191b246f3810affef3c202c61f894e98781071116618eefb8592c"} Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.642114 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.669871 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wlxqg" podStartSLOduration=20.872905682 podStartE2EDuration="26.669839822s" podCreationTimestamp="2026-02-14 19:03:01 +0000 UTC" firstStartedPulling="2026-02-14 19:03:19.653815133 +0000 UTC m=+1252.630223616" lastFinishedPulling="2026-02-14 19:03:25.450749273 +0000 UTC m=+1258.427157756" observedRunningTime="2026-02-14 19:03:27.661835618 +0000 UTC m=+1260.638244101" watchObservedRunningTime="2026-02-14 19:03:27.669839822 +0000 UTC m=+1260.646248315" Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.694158 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-t9xgt" podStartSLOduration=23.235292448 podStartE2EDuration="28.694132616s" podCreationTimestamp="2026-02-14 19:02:59 +0000 UTC" firstStartedPulling="2026-02-14 19:03:19.994402584 +0000 UTC m=+1252.970811067" lastFinishedPulling="2026-02-14 19:03:25.453242762 +0000 UTC m=+1258.429651235" observedRunningTime="2026-02-14 19:03:27.688236898 +0000 UTC m=+1260.664645431" watchObservedRunningTime="2026-02-14 19:03:27.694132616 +0000 UTC m=+1260.670541139" Feb 14 19:03:27 crc kubenswrapper[4897]: I0214 19:03:27.763837 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.845781429 podStartE2EDuration="29.763814922s" podCreationTimestamp="2026-02-14 19:02:58 +0000 UTC" firstStartedPulling="2026-02-14 19:03:19.654134282 +0000 UTC m=+1252.630542765" lastFinishedPulling="2026-02-14 19:03:25.572167775 +0000 UTC m=+1258.548576258" observedRunningTime="2026-02-14 19:03:27.733278311 +0000 UTC m=+1260.709686814" watchObservedRunningTime="2026-02-14 19:03:27.763814922 +0000 UTC m=+1260.740223405" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.656371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8jqrb" event={"ID":"643a69d8-25d7-4261-8848-0793ca7368fb","Type":"ContainerStarted","Data":"dd285ac30afe27a2811c0e9e20984683ce2e9761e051528ed3bbf5d8fd458845"} Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.656782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8jqrb" event={"ID":"643a69d8-25d7-4261-8848-0793ca7368fb","Type":"ContainerStarted","Data":"6d9e3dc09c1ece421055b7c7dc5ec9ce1161a1cd8c74a064a05638a12081ccae"} Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.680272 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xrp8l"] Feb 14 
19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.704335 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-8jqrb" podStartSLOduration=22.640072711 podStartE2EDuration="27.704318511s" podCreationTimestamp="2026-02-14 19:03:01 +0000 UTC" firstStartedPulling="2026-02-14 19:03:20.385655906 +0000 UTC m=+1253.362064389" lastFinishedPulling="2026-02-14 19:03:25.449901696 +0000 UTC m=+1258.426310189" observedRunningTime="2026-02-14 19:03:28.700440107 +0000 UTC m=+1261.676848590" watchObservedRunningTime="2026-02-14 19:03:28.704318511 +0000 UTC m=+1261.680726994" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.736364 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c4mp2"] Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.738291 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.769933 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c4mp2"] Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.828055 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-config\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.828220 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2nld\" (UniqueName: \"kubernetes.io/projected/392af334-f2c0-4b48-9078-37085e1b4750-kube-api-access-f2nld\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 
19:03:28.828276 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.929750 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2nld\" (UniqueName: \"kubernetes.io/projected/392af334-f2c0-4b48-9078-37085e1b4750-kube-api-access-f2nld\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.930096 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.930187 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-config\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.932605 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-config\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.932950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:28 crc kubenswrapper[4897]: I0214 19:03:28.960905 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2nld\" (UniqueName: \"kubernetes.io/projected/392af334-f2c0-4b48-9078-37085e1b4750-kube-api-access-f2nld\") pod \"dnsmasq-dns-7cb5889db5-c4mp2\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.088124 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.423390 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.544383 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvfq4\" (UniqueName: \"kubernetes.io/projected/867917b7-904f-46b3-b1d3-6f9f760aabc7-kube-api-access-fvfq4\") pod \"867917b7-904f-46b3-b1d3-6f9f760aabc7\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.544808 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-dns-svc\") pod \"867917b7-904f-46b3-b1d3-6f9f760aabc7\" (UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.544836 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-config\") pod \"867917b7-904f-46b3-b1d3-6f9f760aabc7\" 
(UID: \"867917b7-904f-46b3-b1d3-6f9f760aabc7\") " Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.546515 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-config" (OuterVolumeSpecName: "config") pod "867917b7-904f-46b3-b1d3-6f9f760aabc7" (UID: "867917b7-904f-46b3-b1d3-6f9f760aabc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.547341 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "867917b7-904f-46b3-b1d3-6f9f760aabc7" (UID: "867917b7-904f-46b3-b1d3-6f9f760aabc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.553693 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867917b7-904f-46b3-b1d3-6f9f760aabc7-kube-api-access-fvfq4" (OuterVolumeSpecName: "kube-api-access-fvfq4") pod "867917b7-904f-46b3-b1d3-6f9f760aabc7" (UID: "867917b7-904f-46b3-b1d3-6f9f760aabc7"). InnerVolumeSpecName "kube-api-access-fvfq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.647505 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvfq4\" (UniqueName: \"kubernetes.io/projected/867917b7-904f-46b3-b1d3-6f9f760aabc7-kube-api-access-fvfq4\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.647549 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.647560 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867917b7-904f-46b3-b1d3-6f9f760aabc7-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.666501 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bbbc45ca-578f-42e4-b2e9-596c8b2587a1","Type":"ContainerStarted","Data":"d41079c0472f271f1b5a4da10ab2746bdf9b2307b2c6ebde64e49a9fe45d042e"} Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.669588 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1d77a004-19c2-43a0-bbe7-6e94f0d05a4e","Type":"ContainerStarted","Data":"1203c4a0dddfed69706a425b10b1008e23dafc1dfa248e7500ced342aca478fd"} Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.671602 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l" event={"ID":"867917b7-904f-46b3-b1d3-6f9f760aabc7","Type":"ContainerDied","Data":"e88fee1812107a76d5ff1dfb78b02206697fadfd581ce075f0d90af9b3beb734"} Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.671663 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xrp8l" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.684438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerStarted","Data":"53fe9c492b6aef0c76559eeb95e05410cfe0e717f929994304c4c15b84519dcf"} Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.684478 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.684914 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.740068 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.69676093 podStartE2EDuration="24.740051369s" podCreationTimestamp="2026-02-14 19:03:05 +0000 UTC" firstStartedPulling="2026-02-14 19:03:20.228309772 +0000 UTC m=+1253.204718255" lastFinishedPulling="2026-02-14 19:03:29.271600211 +0000 UTC m=+1262.248008694" observedRunningTime="2026-02-14 19:03:29.69327095 +0000 UTC m=+1262.669679453" watchObservedRunningTime="2026-02-14 19:03:29.740051369 +0000 UTC m=+1262.716459852" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.742707 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c4mp2"] Feb 14 19:03:29 crc kubenswrapper[4897]: W0214 19:03:29.744803 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392af334_f2c0_4b48_9078_37085e1b4750.slice/crio-20b32f38ad2bcb169c555370c6a350ac16e3a66016103fe37a295678f268fa07 WatchSource:0}: Error finding container 20b32f38ad2bcb169c555370c6a350ac16e3a66016103fe37a295678f268fa07: Status 404 returned error can't find the container with id 
20b32f38ad2bcb169c555370c6a350ac16e3a66016103fe37a295678f268fa07 Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.774828 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.695592022 podStartE2EDuration="29.774815144s" podCreationTimestamp="2026-02-14 19:03:00 +0000 UTC" firstStartedPulling="2026-02-14 19:03:20.213182061 +0000 UTC m=+1253.189590544" lastFinishedPulling="2026-02-14 19:03:29.292405183 +0000 UTC m=+1262.268813666" observedRunningTime="2026-02-14 19:03:29.773016647 +0000 UTC m=+1262.749425150" watchObservedRunningTime="2026-02-14 19:03:29.774815144 +0000 UTC m=+1262.751223627" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.810605 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xrp8l"] Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.818056 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xrp8l"] Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.906979 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.914079 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.916817 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.917078 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.917219 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.917428 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9zsb5" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.918914 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.919227 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.925903 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.946049 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.959040 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.959097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-fa310386-462e-425b-a027-ef5a8c9297e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa310386-462e-425b-a027-ef5a8c9297e8\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.959176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674b3cbc-fa6f-4475-bebd-314f24beaaa0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.959207 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/674b3cbc-fa6f-4475-bebd-314f24beaaa0-cache\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.959259 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/674b3cbc-fa6f-4475-bebd-314f24beaaa0-lock\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:29 crc kubenswrapper[4897]: I0214 19:03:29.959340 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28wr\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-kube-api-access-h28wr\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.061423 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/674b3cbc-fa6f-4475-bebd-314f24beaaa0-lock\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.061558 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h28wr\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-kube-api-access-h28wr\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.061623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.061659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fa310386-462e-425b-a027-ef5a8c9297e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa310386-462e-425b-a027-ef5a8c9297e8\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.061735 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674b3cbc-fa6f-4475-bebd-314f24beaaa0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.061779 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/674b3cbc-fa6f-4475-bebd-314f24beaaa0-cache\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " 
pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.062016 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/674b3cbc-fa6f-4475-bebd-314f24beaaa0-lock\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: E0214 19:03:30.062213 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.062229 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/674b3cbc-fa6f-4475-bebd-314f24beaaa0-cache\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: E0214 19:03:30.062235 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 19:03:30 crc kubenswrapper[4897]: E0214 19:03:30.062306 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift podName:674b3cbc-fa6f-4475-bebd-314f24beaaa0 nodeName:}" failed. No retries permitted until 2026-02-14 19:03:30.562288767 +0000 UTC m=+1263.538697240 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift") pod "swift-storage-0" (UID: "674b3cbc-fa6f-4475-bebd-314f24beaaa0") : configmap "swift-ring-files" not found Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.068802 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674b3cbc-fa6f-4475-bebd-314f24beaaa0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.068815 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.068863 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fa310386-462e-425b-a027-ef5a8c9297e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa310386-462e-425b-a027-ef5a8c9297e8\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb6fbb75872564f9a306a08cdfead4d19dd16edf5676d4427b76aaa2f3696baa/globalmount\"" pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.078508 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h28wr\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-kube-api-access-h28wr\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.116995 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fa310386-462e-425b-a027-ef5a8c9297e8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa310386-462e-425b-a027-ef5a8c9297e8\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.378301 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.451528 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.466564 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nm7qg"] Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.470511 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.475659 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.475870 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.477421 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.480920 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nm7qg"] Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.576295 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-combined-ca-bundle\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc 
kubenswrapper[4897]: I0214 19:03:30.576532 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-dispersionconf\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.576677 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-scripts\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.576993 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-ring-data-devices\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.577156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/18272353-8a77-4df9-baab-a4c2a6e6d0cb-etc-swift\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.577222 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-swiftconf\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc 
kubenswrapper[4897]: I0214 19:03:30.577282 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbgp\" (UniqueName: \"kubernetes.io/projected/18272353-8a77-4df9-baab-a4c2a6e6d0cb-kube-api-access-bzbgp\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.577416 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:30 crc kubenswrapper[4897]: E0214 19:03:30.577603 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 19:03:30 crc kubenswrapper[4897]: E0214 19:03:30.577629 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 19:03:30 crc kubenswrapper[4897]: E0214 19:03:30.577689 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift podName:674b3cbc-fa6f-4475-bebd-314f24beaaa0 nodeName:}" failed. No retries permitted until 2026-02-14 19:03:31.577668106 +0000 UTC m=+1264.554076589 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift") pod "swift-storage-0" (UID: "674b3cbc-fa6f-4475-bebd-314f24beaaa0") : configmap "swift-ring-files" not found Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679593 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-combined-ca-bundle\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679655 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-dispersionconf\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679688 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-scripts\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679817 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-ring-data-devices\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679841 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/18272353-8a77-4df9-baab-a4c2a6e6d0cb-etc-swift\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679882 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-swiftconf\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.679919 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzbgp\" (UniqueName: \"kubernetes.io/projected/18272353-8a77-4df9-baab-a4c2a6e6d0cb-kube-api-access-bzbgp\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.681145 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-ring-data-devices\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.689943 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/18272353-8a77-4df9-baab-a4c2a6e6d0cb-etc-swift\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.690327 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-scripts\") pod \"swift-ring-rebalance-nm7qg\" 
(UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.690377 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-dispersionconf\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.690556 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-combined-ca-bundle\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.694504 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-swiftconf\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.702662 4897 generic.go:334] "Generic (PLEG): container finished" podID="392af334-f2c0-4b48-9078-37085e1b4750" containerID="bb5d692144e3d992ae0206bb3ba4d7560d1f86fd04b883c05acde5be7143da4a" exitCode=0 Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.705011 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" event={"ID":"392af334-f2c0-4b48-9078-37085e1b4750","Type":"ContainerDied","Data":"bb5d692144e3d992ae0206bb3ba4d7560d1f86fd04b883c05acde5be7143da4a"} Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.705099 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" 
event={"ID":"392af334-f2c0-4b48-9078-37085e1b4750","Type":"ContainerStarted","Data":"20b32f38ad2bcb169c555370c6a350ac16e3a66016103fe37a295678f268fa07"} Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.707552 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.723741 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzbgp\" (UniqueName: \"kubernetes.io/projected/18272353-8a77-4df9-baab-a4c2a6e6d0cb-kube-api-access-bzbgp\") pod \"swift-ring-rebalance-nm7qg\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") " pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.728426 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f7fb6d64c-hkskf" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.799879 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nm7qg" Feb 14 19:03:30 crc kubenswrapper[4897]: I0214 19:03:30.917761 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b9ddbfb7b-bnlsc"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.370644 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nm7qg"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.424223 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 14 19:03:31 crc kubenswrapper[4897]: E0214 19:03:31.456981 4897 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 14 19:03:31 crc kubenswrapper[4897]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/a4ee62cf-bcef-4904-a262-600ed17f3719/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 14 19:03:31 crc kubenswrapper[4897]: > podSandboxID="5843b8eacbc559a16fdc395dcd3d3b1f48bf95cb8654cc7a0329d2844baafda5" Feb 14 19:03:31 crc kubenswrapper[4897]: E0214 19:03:31.457154 4897 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 14 19:03:31 crc kubenswrapper[4897]: init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kp46h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-tlgx5_openstack(a4ee62cf-bcef-4904-a262-600ed17f3719): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/a4ee62cf-bcef-4904-a262-600ed17f3719/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 14 
19:03:31 crc kubenswrapper[4897]: > logger="UnhandledError" Feb 14 19:03:31 crc kubenswrapper[4897]: E0214 19:03:31.459157 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/a4ee62cf-bcef-4904-a262-600ed17f3719/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5" podUID="a4ee62cf-bcef-4904-a262-600ed17f3719" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.601688 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:31 crc kubenswrapper[4897]: E0214 19:03:31.602058 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 19:03:31 crc kubenswrapper[4897]: E0214 19:03:31.602073 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 19:03:31 crc kubenswrapper[4897]: E0214 19:03:31.602111 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift podName:674b3cbc-fa6f-4475-bebd-314f24beaaa0 nodeName:}" failed. No retries permitted until 2026-02-14 19:03:33.602098795 +0000 UTC m=+1266.578507278 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift") pod "swift-storage-0" (UID: "674b3cbc-fa6f-4475-bebd-314f24beaaa0") : configmap "swift-ring-files" not found Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.714633 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32d6ef5f-5f6d-4563-91e7-94928fbe901d","Type":"ContainerStarted","Data":"0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04"} Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.728839 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tlgx5"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.731630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" event={"ID":"392af334-f2c0-4b48-9078-37085e1b4750","Type":"ContainerStarted","Data":"6fffb82b34baeab21f60010f29ac485c75a2be39522b457e72e70762791abdd7"} Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.732572 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.734093 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nm7qg" event={"ID":"18272353-8a77-4df9-baab-a4c2a6e6d0cb","Type":"ContainerStarted","Data":"d3ddd761b1677b724655be81adee5036f8d4d55a86b258c2f92ae0e349c84d6d"} Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.788151 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-r68wj"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.789790 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.792371 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.803143 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9ql27"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.834095 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.844045 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.891158 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867917b7-904f-46b3-b1d3-6f9f760aabc7" path="/var/lib/kubelet/pods/867917b7-904f-46b3-b1d3-6f9f760aabc7/volumes" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.891571 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-r68wj"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.891622 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9ql27"] Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.900600 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" podStartSLOduration=3.494345158 podStartE2EDuration="3.900580437s" podCreationTimestamp="2026-02-14 19:03:28 +0000 UTC" firstStartedPulling="2026-02-14 19:03:29.75519456 +0000 UTC m=+1262.731603043" lastFinishedPulling="2026-02-14 19:03:30.161429839 +0000 UTC m=+1263.137838322" observedRunningTime="2026-02-14 19:03:31.775371525 +0000 UTC m=+1264.751780028" watchObservedRunningTime="2026-02-14 19:03:31.900580437 +0000 UTC m=+1264.876988920" Feb 14 19:03:31 crc kubenswrapper[4897]: 
I0214 19:03:31.940399 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-ovn-rundir\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940495 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv26r\" (UniqueName: \"kubernetes.io/projected/71c86e05-3ae7-4139-bd89-cf4311b2deed-kube-api-access-nv26r\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940525 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-config\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940559 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940573 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:31 crc 
kubenswrapper[4897]: I0214 19:03:31.940679 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwm4\" (UniqueName: \"kubernetes.io/projected/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-kube-api-access-7xwm4\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940740 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940755 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-ovs-rundir\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940769 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-config\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:31 crc kubenswrapper[4897]: I0214 19:03:31.940783 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-combined-ca-bundle\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " 
pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042418 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xwm4\" (UniqueName: \"kubernetes.io/projected/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-kube-api-access-7xwm4\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042748 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-ovs-rundir\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042786 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-config\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042805 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-combined-ca-bundle\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27" 
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042824 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-ovn-rundir\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042891 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv26r\" (UniqueName: \"kubernetes.io/projected/71c86e05-3ae7-4139-bd89-cf4311b2deed-kube-api-access-nv26r\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042908 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-config\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042949 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.042966 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.049726 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-ovs-rundir\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.050253 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-config\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.057656 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-ovn-rundir\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.058170 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.058453 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-config\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.071802 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.076066 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c4mp2"]
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.077358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-combined-ca-bundle\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.078233 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.082297 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xwm4\" (UniqueName: \"kubernetes.io/projected/73e940f4-b0ed-44a0-8ec6-ade047f3b0b4-kube-api-access-7xwm4\") pod \"ovn-controller-metrics-9ql27\" (UID: \"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4\") " pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.095560 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv26r\" (UniqueName: \"kubernetes.io/projected/71c86e05-3ae7-4139-bd89-cf4311b2deed-kube-api-access-nv26r\") pod \"dnsmasq-dns-6c89d5d749-r68wj\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") " pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.137931 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7sjdq"]
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.141187 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.145630 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.156994 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7sjdq"]
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.159669 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.187581 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9ql27"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.246001 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.246061 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-config\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.246096 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhn2\" (UniqueName: \"kubernetes.io/projected/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-kube-api-access-5nhn2\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.246136 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-dns-svc\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.246172 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.307675 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.307942 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.347623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.347767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.347793 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-config\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.347823 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhn2\" (UniqueName: \"kubernetes.io/projected/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-kube-api-access-5nhn2\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.347863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-dns-svc\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.348592 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.350567 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.352154 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-config\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.352573 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-dns-svc\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.374760 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhn2\" (UniqueName: \"kubernetes.io/projected/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-kube-api-access-5nhn2\") pod \"dnsmasq-dns-698758b865-7sjdq\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.413223 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.429958 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.448846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-dns-svc\") pod \"a4ee62cf-bcef-4904-a262-600ed17f3719\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") "
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.449117 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-config\") pod \"a4ee62cf-bcef-4904-a262-600ed17f3719\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") "
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.449173 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp46h\" (UniqueName: \"kubernetes.io/projected/a4ee62cf-bcef-4904-a262-600ed17f3719-kube-api-access-kp46h\") pod \"a4ee62cf-bcef-4904-a262-600ed17f3719\" (UID: \"a4ee62cf-bcef-4904-a262-600ed17f3719\") "
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.456113 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ee62cf-bcef-4904-a262-600ed17f3719-kube-api-access-kp46h" (OuterVolumeSpecName: "kube-api-access-kp46h") pod "a4ee62cf-bcef-4904-a262-600ed17f3719" (UID: "a4ee62cf-bcef-4904-a262-600ed17f3719"). InnerVolumeSpecName "kube-api-access-kp46h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.472957 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-config" (OuterVolumeSpecName: "config") pod "a4ee62cf-bcef-4904-a262-600ed17f3719" (UID: "a4ee62cf-bcef-4904-a262-600ed17f3719"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.492735 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4ee62cf-bcef-4904-a262-600ed17f3719" (UID: "a4ee62cf-bcef-4904-a262-600ed17f3719"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.551361 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-config\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.551397 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp46h\" (UniqueName: \"kubernetes.io/projected/a4ee62cf-bcef-4904-a262-600ed17f3719-kube-api-access-kp46h\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.551408 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ee62cf-bcef-4904-a262-600ed17f3719-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.652559 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7sjdq"
Feb 14 19:03:32 crc kubenswrapper[4897]: I0214 19:03:32.676098 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-r68wj"]
Feb 14 19:03:32 crc kubenswrapper[4897]: W0214 19:03:32.687661 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c86e05_3ae7_4139_bd89_cf4311b2deed.slice/crio-5a325a14d2ef81b773cb2e0fe65e3a5d52ab9b695ab426a27e5ad62609dbd55d WatchSource:0}: Error finding container 5a325a14d2ef81b773cb2e0fe65e3a5d52ab9b695ab426a27e5ad62609dbd55d: Status 404 returned error can't find the container with id 5a325a14d2ef81b773cb2e0fe65e3a5d52ab9b695ab426a27e5ad62609dbd55d
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.784541 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5" event={"ID":"a4ee62cf-bcef-4904-a262-600ed17f3719","Type":"ContainerDied","Data":"5843b8eacbc559a16fdc395dcd3d3b1f48bf95cb8654cc7a0329d2844baafda5"}
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.784581 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tlgx5"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.793604 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9ql27"]
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.796738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" event={"ID":"71c86e05-3ae7-4139-bd89-cf4311b2deed","Type":"ContainerStarted","Data":"5a325a14d2ef81b773cb2e0fe65e3a5d52ab9b695ab426a27e5ad62609dbd55d"}
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.844680 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tlgx5"]
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.862080 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tlgx5"]
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:32.906904 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.066348 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.068568 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.090746 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.090882 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.090967 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.091212 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-b9v6p"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.103037 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169159 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169438 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169461 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvkz\" (UniqueName: \"kubernetes.io/projected/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-kube-api-access-9xvkz\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169536 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169580 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-scripts\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169608 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-config\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.169674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273447 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273520 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273562 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273589 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvkz\" (UniqueName: \"kubernetes.io/projected/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-kube-api-access-9xvkz\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273678 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273748 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-scripts\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.273792 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-config\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.274120 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.274835 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-config\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.282624 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.283227 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.287367 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-scripts\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.294461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.296340 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvkz\" (UniqueName: \"kubernetes.io/projected/d9e0766f-fee2-48be-b8d6-1b04e52fe8ee-kube-api-access-9xvkz\") pod \"ovn-northd-0\" (UID: \"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee\") " pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.443595 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.682253 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0"
Feb 14 19:03:33 crc kubenswrapper[4897]: E0214 19:03:33.682521 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 14 19:03:33 crc kubenswrapper[4897]: E0214 19:03:33.682556 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 14 19:03:33 crc kubenswrapper[4897]: E0214 19:03:33.682628 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift podName:674b3cbc-fa6f-4475-bebd-314f24beaaa0 nodeName:}" failed. No retries permitted until 2026-02-14 19:03:37.682608497 +0000 UTC m=+1270.659016990 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift") pod "swift-storage-0" (UID: "674b3cbc-fa6f-4475-bebd-314f24beaaa0") : configmap "swift-ring-files" not found
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.810265 4897 generic.go:334] "Generic (PLEG): container finished" podID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerID="85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862" exitCode=0
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.824047 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ee62cf-bcef-4904-a262-600ed17f3719" path="/var/lib/kubelet/pods/a4ee62cf-bcef-4904-a262-600ed17f3719/volumes"
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.825074 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9ql27" event={"ID":"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4","Type":"ContainerStarted","Data":"e6ef02937439c8200cc0550562822530f6dd0572efe41fe6ba1ee842d5bc0e56"}
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.825100 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9ql27" event={"ID":"73e940f4-b0ed-44a0-8ec6-ade047f3b0b4","Type":"ContainerStarted","Data":"17a28845711abe137f92b533a756d66076bd570b430be8f7ff55146c531677ce"}
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.825141 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" event={"ID":"71c86e05-3ae7-4139-bd89-cf4311b2deed","Type":"ContainerDied","Data":"85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862"}
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.828524 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c8eb488b-8b48-4dea-8a34-dee3346005ef","Type":"ContainerStarted","Data":"a95df9cbd2a6de16e6cd9decf3036159b9c57f996a07f4fb70e3865a9af7ea81"}
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.836516 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="dnsmasq-dns" containerID="cri-o://6fffb82b34baeab21f60010f29ac485c75a2be39522b457e72e70762791abdd7" gracePeriod=10
Feb 14 19:03:33 crc kubenswrapper[4897]: I0214 19:03:33.838038 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9ql27" podStartSLOduration=2.8380084 podStartE2EDuration="2.8380084s" podCreationTimestamp="2026-02-14 19:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:03:33.830422519 +0000 UTC m=+1266.806831002" watchObservedRunningTime="2026-02-14 19:03:33.8380084 +0000 UTC m=+1266.814416883"
Feb 14 19:03:34 crc kubenswrapper[4897]: I0214 19:03:34.319405 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7sjdq"]
Feb 14 19:03:34 crc kubenswrapper[4897]: W0214 19:03:34.489793 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44adf1d8_e13a_4851_8dc7_6939ef2aa45b.slice/crio-ceff3a7ea486078fc4342813e62c319302f74d447a06a93758656e2767edb77f WatchSource:0}: Error finding container ceff3a7ea486078fc4342813e62c319302f74d447a06a93758656e2767edb77f: Status 404 returned error can't find the container with id ceff3a7ea486078fc4342813e62c319302f74d447a06a93758656e2767edb77f
Feb 14 19:03:34 crc kubenswrapper[4897]: I0214 19:03:34.841879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7sjdq" event={"ID":"44adf1d8-e13a-4851-8dc7-6939ef2aa45b","Type":"ContainerStarted","Data":"ceff3a7ea486078fc4342813e62c319302f74d447a06a93758656e2767edb77f"}
Feb 14 19:03:34 crc kubenswrapper[4897]: I0214 19:03:34.845142 4897 generic.go:334] "Generic (PLEG): container finished" podID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerID="53fe9c492b6aef0c76559eeb95e05410cfe0e717f929994304c4c15b84519dcf" exitCode=0
Feb 14 19:03:34 crc kubenswrapper[4897]: I0214 19:03:34.845221 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerDied","Data":"53fe9c492b6aef0c76559eeb95e05410cfe0e717f929994304c4c15b84519dcf"}
Feb 14 19:03:34 crc kubenswrapper[4897]: I0214 19:03:34.849894 4897 generic.go:334] "Generic (PLEG): container finished" podID="392af334-f2c0-4b48-9078-37085e1b4750" containerID="6fffb82b34baeab21f60010f29ac485c75a2be39522b457e72e70762791abdd7" exitCode=0
Feb 14 19:03:34 crc kubenswrapper[4897]: I0214 19:03:34.851013 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" event={"ID":"392af334-f2c0-4b48-9078-37085e1b4750","Type":"ContainerDied","Data":"6fffb82b34baeab21f60010f29ac485c75a2be39522b457e72e70762791abdd7"}
Feb 14 19:03:35 crc kubenswrapper[4897]: I0214 19:03:35.301243 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 14 19:03:35 crc kubenswrapper[4897]: I0214 19:03:35.301320 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 14 19:03:35 crc kubenswrapper[4897]: I0214 19:03:35.407532 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 14 19:03:35 crc kubenswrapper[4897]: I0214 19:03:35.956714 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.464880 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.464935 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.580739 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.918641 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6000-account-create-update-phbsk"]
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.920268 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6000-account-create-update-phbsk"
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.923491 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 14 19:03:36 crc kubenswrapper[4897]: I0214 19:03:36.950939 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6000-account-create-update-phbsk"]
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.014348 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4dvpw"]
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.016092 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4dvpw"
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.035492 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4dvpw"]
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.045793 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.057736 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85p6\" (UniqueName: \"kubernetes.io/projected/59cbf86b-ab14-4d24-953d-5dc1388d0371-kube-api-access-q85p6\") pod \"glance-6000-account-create-update-phbsk\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " pod="openstack/glance-6000-account-create-update-phbsk"
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.057812 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cbf86b-ab14-4d24-953d-5dc1388d0371-operator-scripts\") pod \"glance-6000-account-create-update-phbsk\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " pod="openstack/glance-6000-account-create-update-phbsk"
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.160265 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85p6\" (UniqueName: \"kubernetes.io/projected/59cbf86b-ab14-4d24-953d-5dc1388d0371-kube-api-access-q85p6\") pod \"glance-6000-account-create-update-phbsk\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " pod="openstack/glance-6000-account-create-update-phbsk"
Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.160331 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh5xr\" (UniqueName: \"kubernetes.io/projected/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-kube-api-access-qh5xr\") pod
\"glance-db-create-4dvpw\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.160393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cbf86b-ab14-4d24-953d-5dc1388d0371-operator-scripts\") pod \"glance-6000-account-create-update-phbsk\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " pod="openstack/glance-6000-account-create-update-phbsk" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.160444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-operator-scripts\") pod \"glance-db-create-4dvpw\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.161641 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cbf86b-ab14-4d24-953d-5dc1388d0371-operator-scripts\") pod \"glance-6000-account-create-update-phbsk\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " pod="openstack/glance-6000-account-create-update-phbsk" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.185855 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85p6\" (UniqueName: \"kubernetes.io/projected/59cbf86b-ab14-4d24-953d-5dc1388d0371-kube-api-access-q85p6\") pod \"glance-6000-account-create-update-phbsk\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " pod="openstack/glance-6000-account-create-update-phbsk" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.262909 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-operator-scripts\") pod \"glance-db-create-4dvpw\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.263242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh5xr\" (UniqueName: \"kubernetes.io/projected/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-kube-api-access-qh5xr\") pod \"glance-db-create-4dvpw\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.264911 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-operator-scripts\") pod \"glance-db-create-4dvpw\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.280494 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6000-account-create-update-phbsk" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.286478 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh5xr\" (UniqueName: \"kubernetes.io/projected/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-kube-api-access-qh5xr\") pod \"glance-db-create-4dvpw\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.334149 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.531394 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xlfcf"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.536396 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.546294 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xlfcf"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.609299 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b27-account-create-update-p5jlf"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.610514 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.613627 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.664563 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b27-account-create-update-p5jlf"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.673241 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20fa3de-5325-4d13-a447-78392f703250-operator-scripts\") pod \"keystone-7b27-account-create-update-p5jlf\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.673454 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x85gj\" (UniqueName: \"kubernetes.io/projected/c20fa3de-5325-4d13-a447-78392f703250-kube-api-access-x85gj\") pod \"keystone-7b27-account-create-update-p5jlf\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.673512 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b0c552-7591-4dbd-85ae-bab84ebb7763-operator-scripts\") pod \"keystone-db-create-xlfcf\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.673819 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g758z\" (UniqueName: \"kubernetes.io/projected/17b0c552-7591-4dbd-85ae-bab84ebb7763-kube-api-access-g758z\") pod \"keystone-db-create-xlfcf\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.775549 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g758z\" (UniqueName: \"kubernetes.io/projected/17b0c552-7591-4dbd-85ae-bab84ebb7763-kube-api-access-g758z\") pod \"keystone-db-create-xlfcf\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.775682 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20fa3de-5325-4d13-a447-78392f703250-operator-scripts\") pod \"keystone-7b27-account-create-update-p5jlf\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.775743 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.775767 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x85gj\" 
(UniqueName: \"kubernetes.io/projected/c20fa3de-5325-4d13-a447-78392f703250-kube-api-access-x85gj\") pod \"keystone-7b27-account-create-update-p5jlf\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.775794 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b0c552-7591-4dbd-85ae-bab84ebb7763-operator-scripts\") pod \"keystone-db-create-xlfcf\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: E0214 19:03:37.775860 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 19:03:37 crc kubenswrapper[4897]: E0214 19:03:37.775887 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 19:03:37 crc kubenswrapper[4897]: E0214 19:03:37.775948 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift podName:674b3cbc-fa6f-4475-bebd-314f24beaaa0 nodeName:}" failed. No retries permitted until 2026-02-14 19:03:45.775928481 +0000 UTC m=+1278.752336974 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift") pod "swift-storage-0" (UID: "674b3cbc-fa6f-4475-bebd-314f24beaaa0") : configmap "swift-ring-files" not found Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.776462 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20fa3de-5325-4d13-a447-78392f703250-operator-scripts\") pod \"keystone-7b27-account-create-update-p5jlf\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.776525 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b0c552-7591-4dbd-85ae-bab84ebb7763-operator-scripts\") pod \"keystone-db-create-xlfcf\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.807264 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x85gj\" (UniqueName: \"kubernetes.io/projected/c20fa3de-5325-4d13-a447-78392f703250-kube-api-access-x85gj\") pod \"keystone-7b27-account-create-update-p5jlf\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.814138 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g758z\" (UniqueName: \"kubernetes.io/projected/17b0c552-7591-4dbd-85ae-bab84ebb7763-kube-api-access-g758z\") pod \"keystone-db-create-xlfcf\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.825865 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-db-create-vs6xr"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.827327 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.841760 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vs6xr"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.862177 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.902811 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2543-account-create-update-66jmj"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.912734 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.914786 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.925246 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.928095 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2543-account-create-update-66jmj"] Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.993090 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bp79\" (UniqueName: \"kubernetes.io/projected/b336c8ba-c121-4c43-a75b-8111283a595b-kube-api-access-6bp79\") pod \"placement-db-create-vs6xr\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.993448 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b336c8ba-c121-4c43-a75b-8111283a595b-operator-scripts\") pod \"placement-db-create-vs6xr\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.993572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxr6\" (UniqueName: \"kubernetes.io/projected/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-kube-api-access-zgxr6\") pod \"placement-2543-account-create-update-66jmj\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:37 crc kubenswrapper[4897]: I0214 19:03:37.993630 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-operator-scripts\") pod \"placement-2543-account-create-update-66jmj\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:38 crc 
kubenswrapper[4897]: I0214 19:03:38.096116 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bp79\" (UniqueName: \"kubernetes.io/projected/b336c8ba-c121-4c43-a75b-8111283a595b-kube-api-access-6bp79\") pod \"placement-db-create-vs6xr\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.096208 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b336c8ba-c121-4c43-a75b-8111283a595b-operator-scripts\") pod \"placement-db-create-vs6xr\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.096342 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxr6\" (UniqueName: \"kubernetes.io/projected/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-kube-api-access-zgxr6\") pod \"placement-2543-account-create-update-66jmj\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.096399 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-operator-scripts\") pod \"placement-2543-account-create-update-66jmj\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.097570 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-operator-scripts\") pod \"placement-2543-account-create-update-66jmj\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " 
pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.097615 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b336c8ba-c121-4c43-a75b-8111283a595b-operator-scripts\") pod \"placement-db-create-vs6xr\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.118863 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bp79\" (UniqueName: \"kubernetes.io/projected/b336c8ba-c121-4c43-a75b-8111283a595b-kube-api-access-6bp79\") pod \"placement-db-create-vs6xr\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.125432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxr6\" (UniqueName: \"kubernetes.io/projected/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-kube-api-access-zgxr6\") pod \"placement-2543-account-create-update-66jmj\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.229489 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.238774 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.626890 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvs2m"] Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.630132 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.647609 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvs2m"] Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.715888 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqxg\" (UniqueName: \"kubernetes.io/projected/4940f666-ec19-4b4c-9eb6-4cce233844f9-kube-api-access-6tqxg\") pod \"mysqld-exporter-openstack-db-create-xvs2m\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.716508 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4940f666-ec19-4b4c-9eb6-4cce233844f9-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-xvs2m\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.818242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4940f666-ec19-4b4c-9eb6-4cce233844f9-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-xvs2m\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.818402 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqxg\" (UniqueName: \"kubernetes.io/projected/4940f666-ec19-4b4c-9eb6-4cce233844f9-kube-api-access-6tqxg\") pod \"mysqld-exporter-openstack-db-create-xvs2m\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 
14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.819608 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4940f666-ec19-4b4c-9eb6-4cce233844f9-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-xvs2m\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.825910 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-6627-account-create-update-jr9tq"] Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.828643 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.833332 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.853094 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqxg\" (UniqueName: \"kubernetes.io/projected/4940f666-ec19-4b4c-9eb6-4cce233844f9-kube-api-access-6tqxg\") pod \"mysqld-exporter-openstack-db-create-xvs2m\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.857441 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6627-account-create-update-jr9tq"] Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.887953 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.920747 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/064037fd-b986-4cd9-bb3e-1000c25a3606-operator-scripts\") pod \"mysqld-exporter-6627-account-create-update-jr9tq\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.921003 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zwg\" (UniqueName: \"kubernetes.io/projected/064037fd-b986-4cd9-bb3e-1000c25a3606-kube-api-access-r4zwg\") pod \"mysqld-exporter-6627-account-create-update-jr9tq\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:38 crc kubenswrapper[4897]: I0214 19:03:38.962152 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:39 crc kubenswrapper[4897]: I0214 19:03:39.023663 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/064037fd-b986-4cd9-bb3e-1000c25a3606-operator-scripts\") pod \"mysqld-exporter-6627-account-create-update-jr9tq\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:39 crc kubenswrapper[4897]: I0214 19:03:39.023766 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4zwg\" (UniqueName: \"kubernetes.io/projected/064037fd-b986-4cd9-bb3e-1000c25a3606-kube-api-access-r4zwg\") pod \"mysqld-exporter-6627-account-create-update-jr9tq\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:39 crc kubenswrapper[4897]: I0214 19:03:39.024613 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/064037fd-b986-4cd9-bb3e-1000c25a3606-operator-scripts\") pod \"mysqld-exporter-6627-account-create-update-jr9tq\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:39 crc kubenswrapper[4897]: I0214 19:03:39.038703 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4zwg\" (UniqueName: \"kubernetes.io/projected/064037fd-b986-4cd9-bb3e-1000c25a3606-kube-api-access-r4zwg\") pod \"mysqld-exporter-6627-account-create-update-jr9tq\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:39 crc kubenswrapper[4897]: I0214 19:03:39.211655 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:40 crc kubenswrapper[4897]: I0214 19:03:40.902495 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:40 crc kubenswrapper[4897]: I0214 19:03:40.935369 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" event={"ID":"392af334-f2c0-4b48-9078-37085e1b4750","Type":"ContainerDied","Data":"20b32f38ad2bcb169c555370c6a350ac16e3a66016103fe37a295678f268fa07"} Feb 14 19:03:40 crc kubenswrapper[4897]: I0214 19:03:40.935426 4897 scope.go:117] "RemoveContainer" containerID="6fffb82b34baeab21f60010f29ac485c75a2be39522b457e72e70762791abdd7" Feb 14 19:03:40 crc kubenswrapper[4897]: I0214 19:03:40.935450 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.006909 4897 scope.go:117] "RemoveContainer" containerID="bb5d692144e3d992ae0206bb3ba4d7560d1f86fd04b883c05acde5be7143da4a" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.070811 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2nld\" (UniqueName: \"kubernetes.io/projected/392af334-f2c0-4b48-9078-37085e1b4750-kube-api-access-f2nld\") pod \"392af334-f2c0-4b48-9078-37085e1b4750\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.070872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-dns-svc\") pod \"392af334-f2c0-4b48-9078-37085e1b4750\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.070924 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-config\") pod \"392af334-f2c0-4b48-9078-37085e1b4750\" (UID: \"392af334-f2c0-4b48-9078-37085e1b4750\") " Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.075476 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392af334-f2c0-4b48-9078-37085e1b4750-kube-api-access-f2nld" (OuterVolumeSpecName: "kube-api-access-f2nld") pod "392af334-f2c0-4b48-9078-37085e1b4750" (UID: "392af334-f2c0-4b48-9078-37085e1b4750"). InnerVolumeSpecName "kube-api-access-f2nld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.116359 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "392af334-f2c0-4b48-9078-37085e1b4750" (UID: "392af334-f2c0-4b48-9078-37085e1b4750"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.122474 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-config" (OuterVolumeSpecName: "config") pod "392af334-f2c0-4b48-9078-37085e1b4750" (UID: "392af334-f2c0-4b48-9078-37085e1b4750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.174632 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2nld\" (UniqueName: \"kubernetes.io/projected/392af334-f2c0-4b48-9078-37085e1b4750-kube-api-access-f2nld\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.174667 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.174677 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392af334-f2c0-4b48-9078-37085e1b4750-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.263710 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.291024 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c4mp2"] Feb 
14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.299395 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c4mp2"] Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.775784 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4dvpw"] Feb 14 19:03:41 crc kubenswrapper[4897]: W0214 19:03:41.802636 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod064037fd_b986_4cd9_bb3e_1000c25a3606.slice/crio-eda38f8c0598254a3fba2f450e9bdfc31d850ba53ff9d2953f149d281457abca WatchSource:0}: Error finding container eda38f8c0598254a3fba2f450e9bdfc31d850ba53ff9d2953f149d281457abca: Status 404 returned error can't find the container with id eda38f8c0598254a3fba2f450e9bdfc31d850ba53ff9d2953f149d281457abca Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.807905 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392af334-f2c0-4b48-9078-37085e1b4750" path="/var/lib/kubelet/pods/392af334-f2c0-4b48-9078-37085e1b4750/volumes" Feb 14 19:03:41 crc kubenswrapper[4897]: W0214 19:03:41.808637 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59cbf86b_ab14_4d24_953d_5dc1388d0371.slice/crio-8cb4a89af5c514c8b7a86c76b4de4ad92c45134e8088c9afb564a8b527045741 WatchSource:0}: Error finding container 8cb4a89af5c514c8b7a86c76b4de4ad92c45134e8088c9afb564a8b527045741: Status 404 returned error can't find the container with id 8cb4a89af5c514c8b7a86c76b4de4ad92c45134e8088c9afb564a8b527045741 Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.808705 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6627-account-create-update-jr9tq"] Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.808727 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-6000-account-create-update-phbsk"] Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.946729 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee","Type":"ContainerStarted","Data":"cdf0ebd33bdff1a5b23053736b7515b53bf42384fdd0c1a448a5375efffe62bf"} Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.949170 4897 generic.go:334] "Generic (PLEG): container finished" podID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerID="b233d2b5a7cc405f3917a1e17bfad0c495758eda8b5064577300ca62448da2b0" exitCode=0 Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.949231 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7sjdq" event={"ID":"44adf1d8-e13a-4851-8dc7-6939ef2aa45b","Type":"ContainerDied","Data":"b233d2b5a7cc405f3917a1e17bfad0c495758eda8b5064577300ca62448da2b0"} Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.953453 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nm7qg" event={"ID":"18272353-8a77-4df9-baab-a4c2a6e6d0cb","Type":"ContainerStarted","Data":"197953c4a994a27c189320255e8ed9c03f2054f7520abe2cc96ee59500d7b0cd"} Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.960658 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4dvpw" event={"ID":"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9","Type":"ContainerStarted","Data":"14cdb8474595b94cbf8ec90984b7c33469cff41f315b3d604495124ea05032ee"} Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.963446 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" event={"ID":"064037fd-b986-4cd9-bb3e-1000c25a3606","Type":"ContainerStarted","Data":"eda38f8c0598254a3fba2f450e9bdfc31d850ba53ff9d2953f149d281457abca"} Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.982408 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" event={"ID":"71c86e05-3ae7-4139-bd89-cf4311b2deed","Type":"ContainerStarted","Data":"b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463"} Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.986962 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:41 crc kubenswrapper[4897]: I0214 19:03:41.992771 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6000-account-create-update-phbsk" event={"ID":"59cbf86b-ab14-4d24-953d-5dc1388d0371","Type":"ContainerStarted","Data":"8cb4a89af5c514c8b7a86c76b4de4ad92c45134e8088c9afb564a8b527045741"} Feb 14 19:03:42 crc kubenswrapper[4897]: I0214 19:03:42.002842 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2543-account-create-update-66jmj"] Feb 14 19:03:42 crc kubenswrapper[4897]: W0214 19:03:42.017162 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5bbf96_28f9_4afd_ae13_d4927c001e7a.slice/crio-79247d93bb3c642ba8733b96d245e32f8515146bc39bf27640652e67d0d813da WatchSource:0}: Error finding container 79247d93bb3c642ba8733b96d245e32f8515146bc39bf27640652e67d0d813da: Status 404 returned error can't find the container with id 79247d93bb3c642ba8733b96d245e32f8515146bc39bf27640652e67d0d813da Feb 14 19:03:42 crc kubenswrapper[4897]: W0214 19:03:42.021380 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb336c8ba_c121_4c43_a75b_8111283a595b.slice/crio-ddc00d5a688e0bf9b4f9bb4cb279007aa93ec50600c005b288a3e0f0142f3990 WatchSource:0}: Error finding container ddc00d5a688e0bf9b4f9bb4cb279007aa93ec50600c005b288a3e0f0142f3990: Status 404 returned error can't find the container with id ddc00d5a688e0bf9b4f9bb4cb279007aa93ec50600c005b288a3e0f0142f3990 Feb 14 19:03:42 crc 
kubenswrapper[4897]: I0214 19:03:42.029165 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvs2m"] Feb 14 19:03:42 crc kubenswrapper[4897]: I0214 19:03:42.036910 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nm7qg" podStartSLOduration=2.520520421 podStartE2EDuration="12.036888706s" podCreationTimestamp="2026-02-14 19:03:30 +0000 UTC" firstStartedPulling="2026-02-14 19:03:31.376407277 +0000 UTC m=+1264.352815760" lastFinishedPulling="2026-02-14 19:03:40.892775562 +0000 UTC m=+1273.869184045" observedRunningTime="2026-02-14 19:03:41.991456132 +0000 UTC m=+1274.967864625" watchObservedRunningTime="2026-02-14 19:03:42.036888706 +0000 UTC m=+1275.013297189" Feb 14 19:03:42 crc kubenswrapper[4897]: I0214 19:03:42.065756 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vs6xr"] Feb 14 19:03:42 crc kubenswrapper[4897]: I0214 19:03:42.070833 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" podStartSLOduration=11.070808655 podStartE2EDuration="11.070808655s" podCreationTimestamp="2026-02-14 19:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:03:42.014972449 +0000 UTC m=+1274.991380942" watchObservedRunningTime="2026-02-14 19:03:42.070808655 +0000 UTC m=+1275.047217138" Feb 14 19:03:42 crc kubenswrapper[4897]: I0214 19:03:42.095321 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b27-account-create-update-p5jlf"] Feb 14 19:03:42 crc kubenswrapper[4897]: I0214 19:03:42.103768 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xlfcf"] Feb 14 19:03:42 crc kubenswrapper[4897]: W0214 19:03:42.112093 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17b0c552_7591_4dbd_85ae_bab84ebb7763.slice/crio-8290b01f1a5e08a3907884f14005c97bb4c6a120f828bcc91d1133ea2c62d773 WatchSource:0}: Error finding container 8290b01f1a5e08a3907884f14005c97bb4c6a120f828bcc91d1133ea2c62d773: Status 404 returned error can't find the container with id 8290b01f1a5e08a3907884f14005c97bb4c6a120f828bcc91d1133ea2c62d773 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.004773 4897 generic.go:334] "Generic (PLEG): container finished" podID="c20fa3de-5325-4d13-a447-78392f703250" containerID="7ed401468f44d5ee9f4acbfaa3c359b34bb2becc5e599ed8c47b3bd54fa18f84" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.004922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b27-account-create-update-p5jlf" event={"ID":"c20fa3de-5325-4d13-a447-78392f703250","Type":"ContainerDied","Data":"7ed401468f44d5ee9f4acbfaa3c359b34bb2becc5e599ed8c47b3bd54fa18f84"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.005210 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b27-account-create-update-p5jlf" event={"ID":"c20fa3de-5325-4d13-a447-78392f703250","Type":"ContainerStarted","Data":"d457d1d19296b8ba5bfe4b6678388b1713442bb6a9b904f2b30fd02f4efc0de4"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.007086 4897 generic.go:334] "Generic (PLEG): container finished" podID="fe5bbf96-28f9-4afd-ae13-d4927c001e7a" containerID="84b1b2ba21c137d84a2cecc0ab53e6c8e2ec2460981434e9231f02872d0e2d5a" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.007123 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2543-account-create-update-66jmj" event={"ID":"fe5bbf96-28f9-4afd-ae13-d4927c001e7a","Type":"ContainerDied","Data":"84b1b2ba21c137d84a2cecc0ab53e6c8e2ec2460981434e9231f02872d0e2d5a"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.007168 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-2543-account-create-update-66jmj" event={"ID":"fe5bbf96-28f9-4afd-ae13-d4927c001e7a","Type":"ContainerStarted","Data":"79247d93bb3c642ba8733b96d245e32f8515146bc39bf27640652e67d0d813da"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.009649 4897 generic.go:334] "Generic (PLEG): container finished" podID="b336c8ba-c121-4c43-a75b-8111283a595b" containerID="af8e13fa059457da4b28a33305d835bbbfcdca8d17c865c95f7c3bbcc0e7a01c" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.009704 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vs6xr" event={"ID":"b336c8ba-c121-4c43-a75b-8111283a595b","Type":"ContainerDied","Data":"af8e13fa059457da4b28a33305d835bbbfcdca8d17c865c95f7c3bbcc0e7a01c"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.009721 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vs6xr" event={"ID":"b336c8ba-c121-4c43-a75b-8111283a595b","Type":"ContainerStarted","Data":"ddc00d5a688e0bf9b4f9bb4cb279007aa93ec50600c005b288a3e0f0142f3990"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.011520 4897 generic.go:334] "Generic (PLEG): container finished" podID="17b0c552-7591-4dbd-85ae-bab84ebb7763" containerID="99e1b916759cbd65cc2fe9c5eb37c5a4f92325c3cf72fc55eb003601db030e02" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.011580 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xlfcf" event={"ID":"17b0c552-7591-4dbd-85ae-bab84ebb7763","Type":"ContainerDied","Data":"99e1b916759cbd65cc2fe9c5eb37c5a4f92325c3cf72fc55eb003601db030e02"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.011606 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xlfcf" event={"ID":"17b0c552-7591-4dbd-85ae-bab84ebb7763","Type":"ContainerStarted","Data":"8290b01f1a5e08a3907884f14005c97bb4c6a120f828bcc91d1133ea2c62d773"} Feb 14 
19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.013595 4897 generic.go:334] "Generic (PLEG): container finished" podID="064037fd-b986-4cd9-bb3e-1000c25a3606" containerID="b6c780467831e9ecc8226893feb04cd5fb39d32275431c5cfe579b681fac3f02" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.013661 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" event={"ID":"064037fd-b986-4cd9-bb3e-1000c25a3606","Type":"ContainerDied","Data":"b6c780467831e9ecc8226893feb04cd5fb39d32275431c5cfe579b681fac3f02"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.015505 4897 generic.go:334] "Generic (PLEG): container finished" podID="59cbf86b-ab14-4d24-953d-5dc1388d0371" containerID="b80ec8163eb0793ed425e9a7a931f54593bd3245bbc6454c578b10d00c0069ba" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.015549 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6000-account-create-update-phbsk" event={"ID":"59cbf86b-ab14-4d24-953d-5dc1388d0371","Type":"ContainerDied","Data":"b80ec8163eb0793ed425e9a7a931f54593bd3245bbc6454c578b10d00c0069ba"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.017661 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7sjdq" event={"ID":"44adf1d8-e13a-4851-8dc7-6939ef2aa45b","Type":"ContainerStarted","Data":"dcfad339552725f49966586667622fe50d8a17978d89e13288aee810e5dd908c"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.018196 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7sjdq" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.025962 4897 generic.go:334] "Generic (PLEG): container finished" podID="d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" containerID="92d31e4bfe331edc54debbf0fa29daa0b4c6a31c37b7e70cf91c3ed1b7a0a7e2" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.026134 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4dvpw" event={"ID":"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9","Type":"ContainerDied","Data":"92d31e4bfe331edc54debbf0fa29daa0b4c6a31c37b7e70cf91c3ed1b7a0a7e2"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.028088 4897 generic.go:334] "Generic (PLEG): container finished" podID="4940f666-ec19-4b4c-9eb6-4cce233844f9" containerID="222ac0e3a5fd2dabfd1e5940f06bbb37e4faa29f071c2dd63a406c165506d9ca" exitCode=0 Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.028177 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" event={"ID":"4940f666-ec19-4b4c-9eb6-4cce233844f9","Type":"ContainerDied","Data":"222ac0e3a5fd2dabfd1e5940f06bbb37e4faa29f071c2dd63a406c165506d9ca"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.028212 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" event={"ID":"4940f666-ec19-4b4c-9eb6-4cce233844f9","Type":"ContainerStarted","Data":"943ed03613e63c345c6087ed97063c524bf4d4a56aae0a1b3c54f23a0cb8db53"} Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.052238 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7sjdq" podStartSLOduration=11.052216256 podStartE2EDuration="11.052216256s" podCreationTimestamp="2026-02-14 19:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:03:43.0438534 +0000 UTC m=+1276.020261893" watchObservedRunningTime="2026-02-14 19:03:43.052216256 +0000 UTC m=+1276.028624739" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.559749 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wxmjz"] Feb 14 19:03:43 crc kubenswrapper[4897]: E0214 19:03:43.561720 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="init" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.561752 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="init" Feb 14 19:03:43 crc kubenswrapper[4897]: E0214 19:03:43.561774 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="dnsmasq-dns" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.561780 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="dnsmasq-dns" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.562018 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="dnsmasq-dns" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.562854 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.565474 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.576192 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wxmjz"] Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.744236 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4zb2\" (UniqueName: \"kubernetes.io/projected/ed950cca-3c6f-42a6-ac02-9e1290251fba-kube-api-access-f4zb2\") pod \"root-account-create-update-wxmjz\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") " pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.744937 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ed950cca-3c6f-42a6-ac02-9e1290251fba-operator-scripts\") pod \"root-account-create-update-wxmjz\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") " pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.847471 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4zb2\" (UniqueName: \"kubernetes.io/projected/ed950cca-3c6f-42a6-ac02-9e1290251fba-kube-api-access-f4zb2\") pod \"root-account-create-update-wxmjz\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") " pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.847617 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed950cca-3c6f-42a6-ac02-9e1290251fba-operator-scripts\") pod \"root-account-create-update-wxmjz\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") " pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.848898 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed950cca-3c6f-42a6-ac02-9e1290251fba-operator-scripts\") pod \"root-account-create-update-wxmjz\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") " pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.877859 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4zb2\" (UniqueName: \"kubernetes.io/projected/ed950cca-3c6f-42a6-ac02-9e1290251fba-kube-api-access-f4zb2\") pod \"root-account-create-update-wxmjz\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") " pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:43 crc kubenswrapper[4897]: I0214 19:03:43.893368 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wxmjz" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.044434 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee","Type":"ContainerStarted","Data":"aebefdb3859dc3515c60e1ff8ca9a1f5afd5bc6c9e1eb751614ff170c1b29b2f"} Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.044757 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d9e0766f-fee2-48be-b8d6-1b04e52fe8ee","Type":"ContainerStarted","Data":"f92a9852496dd7927704fd9e857fd35ca4f2a670165b1716d1af4a3435d0a975"} Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.045205 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.079994 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.13260749 podStartE2EDuration="11.0799745s" podCreationTimestamp="2026-02-14 19:03:33 +0000 UTC" firstStartedPulling="2026-02-14 19:03:41.283576429 +0000 UTC m=+1274.259984912" lastFinishedPulling="2026-02-14 19:03:43.230943439 +0000 UTC m=+1276.207351922" observedRunningTime="2026-02-14 19:03:44.072014916 +0000 UTC m=+1277.048423399" watchObservedRunningTime="2026-02-14 19:03:44.0799745 +0000 UTC m=+1277.056382983" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.088920 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-c4mp2" podUID="392af334-f2c0-4b48-9078-37085e1b4750" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.834181 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6000-account-create-update-phbsk" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.861389 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.979374 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.981197 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.988299 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85p6\" (UniqueName: \"kubernetes.io/projected/59cbf86b-ab14-4d24-953d-5dc1388d0371-kube-api-access-q85p6\") pod \"59cbf86b-ab14-4d24-953d-5dc1388d0371\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.988403 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cbf86b-ab14-4d24-953d-5dc1388d0371-operator-scripts\") pod \"59cbf86b-ab14-4d24-953d-5dc1388d0371\" (UID: \"59cbf86b-ab14-4d24-953d-5dc1388d0371\") " Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.988629 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.988688 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4940f666-ec19-4b4c-9eb6-4cce233844f9-operator-scripts\") pod \"4940f666-ec19-4b4c-9eb6-4cce233844f9\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.988739 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tqxg\" (UniqueName: \"kubernetes.io/projected/4940f666-ec19-4b4c-9eb6-4cce233844f9-kube-api-access-6tqxg\") pod \"4940f666-ec19-4b4c-9eb6-4cce233844f9\" (UID: \"4940f666-ec19-4b4c-9eb6-4cce233844f9\") " Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.988787 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cbf86b-ab14-4d24-953d-5dc1388d0371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59cbf86b-ab14-4d24-953d-5dc1388d0371" (UID: "59cbf86b-ab14-4d24-953d-5dc1388d0371"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.989243 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cbf86b-ab14-4d24-953d-5dc1388d0371-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.989479 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4940f666-ec19-4b4c-9eb6-4cce233844f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4940f666-ec19-4b4c-9eb6-4cce233844f9" (UID: "4940f666-ec19-4b4c-9eb6-4cce233844f9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.994177 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:44 crc kubenswrapper[4897]: I0214 19:03:44.994161 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4940f666-ec19-4b4c-9eb6-4cce233844f9-kube-api-access-6tqxg" (OuterVolumeSpecName: "kube-api-access-6tqxg") pod "4940f666-ec19-4b4c-9eb6-4cce233844f9" (UID: "4940f666-ec19-4b4c-9eb6-4cce233844f9"). InnerVolumeSpecName "kube-api-access-6tqxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:44.998337 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cbf86b-ab14-4d24-953d-5dc1388d0371-kube-api-access-q85p6" (OuterVolumeSpecName: "kube-api-access-q85p6") pod "59cbf86b-ab14-4d24-953d-5dc1388d0371" (UID: "59cbf86b-ab14-4d24-953d-5dc1388d0371"). InnerVolumeSpecName "kube-api-access-q85p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.054279 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.062737 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2543-account-create-update-66jmj" event={"ID":"fe5bbf96-28f9-4afd-ae13-d4927c001e7a","Type":"ContainerDied","Data":"79247d93bb3c642ba8733b96d245e32f8515146bc39bf27640652e67d0d813da"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.062796 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79247d93bb3c642ba8733b96d245e32f8515146bc39bf27640652e67d0d813da" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.062877 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2543-account-create-update-66jmj" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.064043 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.065221 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vs6xr" event={"ID":"b336c8ba-c121-4c43-a75b-8111283a595b","Type":"ContainerDied","Data":"ddc00d5a688e0bf9b4f9bb4cb279007aa93ec50600c005b288a3e0f0142f3990"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.065249 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vs6xr" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.065262 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc00d5a688e0bf9b4f9bb4cb279007aa93ec50600c005b288a3e0f0142f3990" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.072551 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4dvpw" event={"ID":"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9","Type":"ContainerDied","Data":"14cdb8474595b94cbf8ec90984b7c33469cff41f315b3d604495124ea05032ee"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.072584 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14cdb8474595b94cbf8ec90984b7c33469cff41f315b3d604495124ea05032ee" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.072639 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4dvpw" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.078055 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xlfcf" event={"ID":"17b0c552-7591-4dbd-85ae-bab84ebb7763","Type":"ContainerDied","Data":"8290b01f1a5e08a3907884f14005c97bb4c6a120f828bcc91d1133ea2c62d773"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.078095 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8290b01f1a5e08a3907884f14005c97bb4c6a120f828bcc91d1133ea2c62d773" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.078161 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xlfcf" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.091999 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-operator-scripts\") pod \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092132 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x85gj\" (UniqueName: \"kubernetes.io/projected/c20fa3de-5325-4d13-a447-78392f703250-kube-api-access-x85gj\") pod \"c20fa3de-5325-4d13-a447-78392f703250\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092199 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh5xr\" (UniqueName: \"kubernetes.io/projected/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-kube-api-access-qh5xr\") pod \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\" (UID: \"d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092332 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20fa3de-5325-4d13-a447-78392f703250-operator-scripts\") pod \"c20fa3de-5325-4d13-a447-78392f703250\" (UID: \"c20fa3de-5325-4d13-a447-78392f703250\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092403 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-operator-scripts\") pod \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092443 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b336c8ba-c121-4c43-a75b-8111283a595b-operator-scripts\") pod \"b336c8ba-c121-4c43-a75b-8111283a595b\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092488 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgxr6\" (UniqueName: \"kubernetes.io/projected/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-kube-api-access-zgxr6\") pod \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\" (UID: \"fe5bbf96-28f9-4afd-ae13-d4927c001e7a\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092561 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bp79\" (UniqueName: \"kubernetes.io/projected/b336c8ba-c121-4c43-a75b-8111283a595b-kube-api-access-6bp79\") pod \"b336c8ba-c121-4c43-a75b-8111283a595b\" (UID: \"b336c8ba-c121-4c43-a75b-8111283a595b\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.092646 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" (UID: "d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093083 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20fa3de-5325-4d13-a447-78392f703250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c20fa3de-5325-4d13-a447-78392f703250" (UID: "c20fa3de-5325-4d13-a447-78392f703250"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093236 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85p6\" (UniqueName: \"kubernetes.io/projected/59cbf86b-ab14-4d24-953d-5dc1388d0371-kube-api-access-q85p6\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093263 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20fa3de-5325-4d13-a447-78392f703250-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093282 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4940f666-ec19-4b4c-9eb6-4cce233844f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093298 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tqxg\" (UniqueName: \"kubernetes.io/projected/4940f666-ec19-4b4c-9eb6-4cce233844f9-kube-api-access-6tqxg\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093321 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.093726 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b336c8ba-c121-4c43-a75b-8111283a595b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b336c8ba-c121-4c43-a75b-8111283a595b" (UID: "b336c8ba-c121-4c43-a75b-8111283a595b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.094114 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe5bbf96-28f9-4afd-ae13-d4927c001e7a" (UID: "fe5bbf96-28f9-4afd-ae13-d4927c001e7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.094897 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.094895 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-xvs2m" event={"ID":"4940f666-ec19-4b4c-9eb6-4cce233844f9","Type":"ContainerDied","Data":"943ed03613e63c345c6087ed97063c524bf4d4a56aae0a1b3c54f23a0cb8db53"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.095159 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943ed03613e63c345c6087ed97063c524bf4d4a56aae0a1b3c54f23a0cb8db53" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.103802 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-kube-api-access-qh5xr" (OuterVolumeSpecName: "kube-api-access-qh5xr") pod "d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" (UID: "d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9"). InnerVolumeSpecName "kube-api-access-qh5xr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.103943 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20fa3de-5325-4d13-a447-78392f703250-kube-api-access-x85gj" (OuterVolumeSpecName: "kube-api-access-x85gj") pod "c20fa3de-5325-4d13-a447-78392f703250" (UID: "c20fa3de-5325-4d13-a447-78392f703250"). InnerVolumeSpecName "kube-api-access-x85gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.104749 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" event={"ID":"064037fd-b986-4cd9-bb3e-1000c25a3606","Type":"ContainerDied","Data":"eda38f8c0598254a3fba2f450e9bdfc31d850ba53ff9d2953f149d281457abca"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.104790 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda38f8c0598254a3fba2f450e9bdfc31d850ba53ff9d2953f149d281457abca" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.104812 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6627-account-create-update-jr9tq" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.104833 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-kube-api-access-zgxr6" (OuterVolumeSpecName: "kube-api-access-zgxr6") pod "fe5bbf96-28f9-4afd-ae13-d4927c001e7a" (UID: "fe5bbf96-28f9-4afd-ae13-d4927c001e7a"). InnerVolumeSpecName "kube-api-access-zgxr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.106344 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b27-account-create-update-p5jlf" event={"ID":"c20fa3de-5325-4d13-a447-78392f703250","Type":"ContainerDied","Data":"d457d1d19296b8ba5bfe4b6678388b1713442bb6a9b904f2b30fd02f4efc0de4"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.106372 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d457d1d19296b8ba5bfe4b6678388b1713442bb6a9b904f2b30fd02f4efc0de4" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.106408 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b27-account-create-update-p5jlf" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.116233 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6000-account-create-update-phbsk" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.119671 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6000-account-create-update-phbsk" event={"ID":"59cbf86b-ab14-4d24-953d-5dc1388d0371","Type":"ContainerDied","Data":"8cb4a89af5c514c8b7a86c76b4de4ad92c45134e8088c9afb564a8b527045741"} Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.119701 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb4a89af5c514c8b7a86c76b4de4ad92c45134e8088c9afb564a8b527045741" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.126699 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wxmjz"] Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.138542 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b336c8ba-c121-4c43-a75b-8111283a595b-kube-api-access-6bp79" (OuterVolumeSpecName: "kube-api-access-6bp79") pod 
"b336c8ba-c121-4c43-a75b-8111283a595b" (UID: "b336c8ba-c121-4c43-a75b-8111283a595b"). InnerVolumeSpecName "kube-api-access-6bp79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195069 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b0c552-7591-4dbd-85ae-bab84ebb7763-operator-scripts\") pod \"17b0c552-7591-4dbd-85ae-bab84ebb7763\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195123 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g758z\" (UniqueName: \"kubernetes.io/projected/17b0c552-7591-4dbd-85ae-bab84ebb7763-kube-api-access-g758z\") pod \"17b0c552-7591-4dbd-85ae-bab84ebb7763\" (UID: \"17b0c552-7591-4dbd-85ae-bab84ebb7763\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195163 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/064037fd-b986-4cd9-bb3e-1000c25a3606-operator-scripts\") pod \"064037fd-b986-4cd9-bb3e-1000c25a3606\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195242 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4zwg\" (UniqueName: \"kubernetes.io/projected/064037fd-b986-4cd9-bb3e-1000c25a3606-kube-api-access-r4zwg\") pod \"064037fd-b986-4cd9-bb3e-1000c25a3606\" (UID: \"064037fd-b986-4cd9-bb3e-1000c25a3606\") " Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195883 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bp79\" (UniqueName: \"kubernetes.io/projected/b336c8ba-c121-4c43-a75b-8111283a595b-kube-api-access-6bp79\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195901 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x85gj\" (UniqueName: \"kubernetes.io/projected/c20fa3de-5325-4d13-a447-78392f703250-kube-api-access-x85gj\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195910 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh5xr\" (UniqueName: \"kubernetes.io/projected/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9-kube-api-access-qh5xr\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195919 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195929 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b336c8ba-c121-4c43-a75b-8111283a595b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.195938 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgxr6\" (UniqueName: \"kubernetes.io/projected/fe5bbf96-28f9-4afd-ae13-d4927c001e7a-kube-api-access-zgxr6\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.196516 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b0c552-7591-4dbd-85ae-bab84ebb7763-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17b0c552-7591-4dbd-85ae-bab84ebb7763" (UID: "17b0c552-7591-4dbd-85ae-bab84ebb7763"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.197097 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064037fd-b986-4cd9-bb3e-1000c25a3606-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "064037fd-b986-4cd9-bb3e-1000c25a3606" (UID: "064037fd-b986-4cd9-bb3e-1000c25a3606"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.198455 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b0c552-7591-4dbd-85ae-bab84ebb7763-kube-api-access-g758z" (OuterVolumeSpecName: "kube-api-access-g758z") pod "17b0c552-7591-4dbd-85ae-bab84ebb7763" (UID: "17b0c552-7591-4dbd-85ae-bab84ebb7763"). InnerVolumeSpecName "kube-api-access-g758z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.198964 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064037fd-b986-4cd9-bb3e-1000c25a3606-kube-api-access-r4zwg" (OuterVolumeSpecName: "kube-api-access-r4zwg") pod "064037fd-b986-4cd9-bb3e-1000c25a3606" (UID: "064037fd-b986-4cd9-bb3e-1000c25a3606"). InnerVolumeSpecName "kube-api-access-r4zwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.298307 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b0c552-7591-4dbd-85ae-bab84ebb7763-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.298346 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g758z\" (UniqueName: \"kubernetes.io/projected/17b0c552-7591-4dbd-85ae-bab84ebb7763-kube-api-access-g758z\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.298355 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/064037fd-b986-4cd9-bb3e-1000c25a3606-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.298365 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4zwg\" (UniqueName: \"kubernetes.io/projected/064037fd-b986-4cd9-bb3e-1000c25a3606-kube-api-access-r4zwg\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:45 crc kubenswrapper[4897]: I0214 19:03:45.807438 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:03:45 crc kubenswrapper[4897]: E0214 19:03:45.807943 4897 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 19:03:45 crc kubenswrapper[4897]: E0214 19:03:45.807960 4897 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 19:03:45 crc kubenswrapper[4897]: E0214 19:03:45.808011 4897 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift podName:674b3cbc-fa6f-4475-bebd-314f24beaaa0 nodeName:}" failed. No retries permitted until 2026-02-14 19:04:01.807992633 +0000 UTC m=+1294.784401116 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift") pod "swift-storage-0" (UID: "674b3cbc-fa6f-4475-bebd-314f24beaaa0") : configmap "swift-ring-files" not found Feb 14 19:03:47 crc kubenswrapper[4897]: W0214 19:03:47.098492 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded950cca_3c6f_42a6_ac02_9e1290251fba.slice/crio-ef21452f378a37c9fc8d9eb448f43c84bdadea6527c9147e47ee1a1da0a0dcda WatchSource:0}: Error finding container ef21452f378a37c9fc8d9eb448f43c84bdadea6527c9147e47ee1a1da0a0dcda: Status 404 returned error can't find the container with id ef21452f378a37c9fc8d9eb448f43c84bdadea6527c9147e47ee1a1da0a0dcda Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.157367 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wxmjz" event={"ID":"ed950cca-3c6f-42a6-ac02-9e1290251fba","Type":"ContainerStarted","Data":"ef21452f378a37c9fc8d9eb448f43c84bdadea6527c9147e47ee1a1da0a0dcda"} Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.162272 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.212504 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7gfbg"] Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214653 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20fa3de-5325-4d13-a447-78392f703250" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc 
kubenswrapper[4897]: I0214 19:03:47.214689 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20fa3de-5325-4d13-a447-78392f703250" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214710 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214717 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214735 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b336c8ba-c121-4c43-a75b-8111283a595b" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214743 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b336c8ba-c121-4c43-a75b-8111283a595b" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214758 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b0c552-7591-4dbd-85ae-bab84ebb7763" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214766 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b0c552-7591-4dbd-85ae-bab84ebb7763" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214786 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cbf86b-ab14-4d24-953d-5dc1388d0371" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214794 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cbf86b-ab14-4d24-953d-5dc1388d0371" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214806 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4940f666-ec19-4b4c-9eb6-4cce233844f9" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214815 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4940f666-ec19-4b4c-9eb6-4cce233844f9" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214830 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064037fd-b986-4cd9-bb3e-1000c25a3606" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214838 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="064037fd-b986-4cd9-bb3e-1000c25a3606" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: E0214 19:03:47.214847 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5bbf96-28f9-4afd-ae13-d4927c001e7a" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.214854 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5bbf96-28f9-4afd-ae13-d4927c001e7a" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215136 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4940f666-ec19-4b4c-9eb6-4cce233844f9" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215158 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20fa3de-5325-4d13-a447-78392f703250" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215168 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215184 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b0c552-7591-4dbd-85ae-bab84ebb7763" containerName="mariadb-database-create" Feb 14 
19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215193 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cbf86b-ab14-4d24-953d-5dc1388d0371" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215207 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b336c8ba-c121-4c43-a75b-8111283a595b" containerName="mariadb-database-create" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215217 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="064037fd-b986-4cd9-bb3e-1000c25a3606" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.215228 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5bbf96-28f9-4afd-ae13-d4927c001e7a" containerName="mariadb-account-create-update" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.216144 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.219472 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.219653 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wcdfs" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.237744 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7gfbg"] Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.389282 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnl9\" (UniqueName: \"kubernetes.io/projected/731750fa-408a-46ef-89bb-5491267222fb-kube-api-access-7rnl9\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.389618 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-combined-ca-bundle\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.389697 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-config-data\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.389877 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-db-sync-config-data\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.492288 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rnl9\" (UniqueName: \"kubernetes.io/projected/731750fa-408a-46ef-89bb-5491267222fb-kube-api-access-7rnl9\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.492481 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-combined-ca-bundle\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.493384 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-config-data\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.493571 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-db-sync-config-data\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.500637 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-config-data\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.501153 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-db-sync-config-data\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.505509 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-combined-ca-bundle\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.511447 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnl9\" (UniqueName: 
\"kubernetes.io/projected/731750fa-408a-46ef-89bb-5491267222fb-kube-api-access-7rnl9\") pod \"glance-db-sync-7gfbg\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.655236 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7sjdq" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.697171 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7gfbg" Feb 14 19:03:47 crc kubenswrapper[4897]: I0214 19:03:47.712661 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-r68wj"] Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.170902 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerStarted","Data":"a9eafbd8ceae4ac75efdffe4f8ba4141b2e5c47225c19003b0f379d5b0e48c75"} Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.172495 4897 generic.go:334] "Generic (PLEG): container finished" podID="ed950cca-3c6f-42a6-ac02-9e1290251fba" containerID="04221d64d92696a181811c2ac60b75595be0e422b637baaa9f90ac2bb60af323" exitCode=0 Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.172555 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wxmjz" event={"ID":"ed950cca-3c6f-42a6-ac02-9e1290251fba","Type":"ContainerDied","Data":"04221d64d92696a181811c2ac60b75595be0e422b637baaa9f90ac2bb60af323"} Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.172674 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerName="dnsmasq-dns" containerID="cri-o://b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463" gracePeriod=10 Feb 14 19:03:48 crc 
kubenswrapper[4897]: I0214 19:03:48.272257 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7gfbg"]
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.796558 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.824932 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-ovsdbserver-sb\") pod \"71c86e05-3ae7-4139-bd89-cf4311b2deed\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") "
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.825082 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-config\") pod \"71c86e05-3ae7-4139-bd89-cf4311b2deed\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") "
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.825264 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv26r\" (UniqueName: \"kubernetes.io/projected/71c86e05-3ae7-4139-bd89-cf4311b2deed-kube-api-access-nv26r\") pod \"71c86e05-3ae7-4139-bd89-cf4311b2deed\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") "
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.825357 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-dns-svc\") pod \"71c86e05-3ae7-4139-bd89-cf4311b2deed\" (UID: \"71c86e05-3ae7-4139-bd89-cf4311b2deed\") "
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.835624 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c86e05-3ae7-4139-bd89-cf4311b2deed-kube-api-access-nv26r" (OuterVolumeSpecName: "kube-api-access-nv26r") pod "71c86e05-3ae7-4139-bd89-cf4311b2deed" (UID: "71c86e05-3ae7-4139-bd89-cf4311b2deed"). InnerVolumeSpecName "kube-api-access-nv26r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.907756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71c86e05-3ae7-4139-bd89-cf4311b2deed" (UID: "71c86e05-3ae7-4139-bd89-cf4311b2deed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.915521 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-config" (OuterVolumeSpecName: "config") pod "71c86e05-3ae7-4139-bd89-cf4311b2deed" (UID: "71c86e05-3ae7-4139-bd89-cf4311b2deed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.929258 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-config\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.929289 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv26r\" (UniqueName: \"kubernetes.io/projected/71c86e05-3ae7-4139-bd89-cf4311b2deed-kube-api-access-nv26r\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.929303 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.930619 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71c86e05-3ae7-4139-bd89-cf4311b2deed" (UID: "71c86e05-3ae7-4139-bd89-cf4311b2deed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.976579 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"]
Feb 14 19:03:48 crc kubenswrapper[4897]: E0214 19:03:48.977061 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerName="dnsmasq-dns"
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.977079 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerName="dnsmasq-dns"
Feb 14 19:03:48 crc kubenswrapper[4897]: E0214 19:03:48.977120 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerName="init"
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.977127 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerName="init"
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.977373 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerName="dnsmasq-dns"
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.978084 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:48 crc kubenswrapper[4897]: I0214 19:03:48.986103 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"]
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.030911 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcpz\" (UniqueName: \"kubernetes.io/projected/1ddc51d6-ba42-4a8c-8488-24ab847bd808-kube-api-access-nxcpz\") pod \"mysqld-exporter-openstack-cell1-db-create-hrbjk\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.031052 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ddc51d6-ba42-4a8c-8488-24ab847bd808-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-hrbjk\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.031200 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c86e05-3ae7-4139-bd89-cf4311b2deed-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.132943 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxcpz\" (UniqueName: \"kubernetes.io/projected/1ddc51d6-ba42-4a8c-8488-24ab847bd808-kube-api-access-nxcpz\") pod \"mysqld-exporter-openstack-cell1-db-create-hrbjk\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.133173 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ddc51d6-ba42-4a8c-8488-24ab847bd808-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-hrbjk\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.133849 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ddc51d6-ba42-4a8c-8488-24ab847bd808-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-hrbjk\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.153580 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxcpz\" (UniqueName: \"kubernetes.io/projected/1ddc51d6-ba42-4a8c-8488-24ab847bd808-kube-api-access-nxcpz\") pod \"mysqld-exporter-openstack-cell1-db-create-hrbjk\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.196670 4897 generic.go:334] "Generic (PLEG): container finished" podID="71c86e05-3ae7-4139-bd89-cf4311b2deed" containerID="b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463" exitCode=0
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.196776 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.197474 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" event={"ID":"71c86e05-3ae7-4139-bd89-cf4311b2deed","Type":"ContainerDied","Data":"b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463"}
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.197504 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-r68wj" event={"ID":"71c86e05-3ae7-4139-bd89-cf4311b2deed","Type":"ContainerDied","Data":"5a325a14d2ef81b773cb2e0fe65e3a5d52ab9b695ab426a27e5ad62609dbd55d"}
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.197516 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-9baa-account-create-update-rbzr5"]
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.198053 4897 scope.go:117] "RemoveContainer" containerID="b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.198987 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7gfbg" event={"ID":"731750fa-408a-46ef-89bb-5491267222fb","Type":"ContainerStarted","Data":"7a44322ebb64834d3af613363cbdd550f935ed7107834c5d264cb285fd9208f9"}
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.199096 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.202143 4897 generic.go:334] "Generic (PLEG): container finished" podID="18272353-8a77-4df9-baab-a4c2a6e6d0cb" containerID="197953c4a994a27c189320255e8ed9c03f2054f7520abe2cc96ee59500d7b0cd" exitCode=0
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.202293 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nm7qg" event={"ID":"18272353-8a77-4df9-baab-a4c2a6e6d0cb","Type":"ContainerDied","Data":"197953c4a994a27c189320255e8ed9c03f2054f7520abe2cc96ee59500d7b0cd"}
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.205692 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.209524 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9baa-account-create-update-rbzr5"]
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.235729 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624qx\" (UniqueName: \"kubernetes.io/projected/7e556a75-3106-43db-b4da-53c6df99cd35-kube-api-access-624qx\") pod \"mysqld-exporter-9baa-account-create-update-rbzr5\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") " pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.235806 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e556a75-3106-43db-b4da-53c6df99cd35-operator-scripts\") pod \"mysqld-exporter-9baa-account-create-update-rbzr5\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") " pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.236734 4897 scope.go:117] "RemoveContainer" containerID="85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.285995 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-r68wj"]
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.295527 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.297159 4897 scope.go:117] "RemoveContainer" containerID="b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.297506 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-r68wj"]
Feb 14 19:03:49 crc kubenswrapper[4897]: E0214 19:03:49.297662 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463\": container with ID starting with b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463 not found: ID does not exist" containerID="b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.297739 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463"} err="failed to get container status \"b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463\": rpc error: code = NotFound desc = could not find container \"b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463\": container with ID starting with b41241ac3e9d5136cb2c98a94e9d4863adb3547d2421f70c3a90f8e0410b8463 not found: ID does not exist"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.297770 4897 scope.go:117] "RemoveContainer" containerID="85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862"
Feb 14 19:03:49 crc kubenswrapper[4897]: E0214 19:03:49.298046 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862\": container with ID starting with 85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862 not found: ID does not exist" containerID="85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.298073 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862"} err="failed to get container status \"85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862\": rpc error: code = NotFound desc = could not find container \"85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862\": container with ID starting with 85f0a9f9ae8d9514f0175172d444cb9405389d3643896161e283a68bf3887862 not found: ID does not exist"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.337555 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-624qx\" (UniqueName: \"kubernetes.io/projected/7e556a75-3106-43db-b4da-53c6df99cd35-kube-api-access-624qx\") pod \"mysqld-exporter-9baa-account-create-update-rbzr5\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") " pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.337644 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e556a75-3106-43db-b4da-53c6df99cd35-operator-scripts\") pod \"mysqld-exporter-9baa-account-create-update-rbzr5\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") " pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.340687 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e556a75-3106-43db-b4da-53c6df99cd35-operator-scripts\") pod \"mysqld-exporter-9baa-account-create-update-rbzr5\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") " pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.385578 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-624qx\" (UniqueName: \"kubernetes.io/projected/7e556a75-3106-43db-b4da-53c6df99cd35-kube-api-access-624qx\") pod \"mysqld-exporter-9baa-account-create-update-rbzr5\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") " pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.532739 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.805061 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c86e05-3ae7-4139-bd89-cf4311b2deed" path="/var/lib/kubelet/pods/71c86e05-3ae7-4139-bd89-cf4311b2deed/volumes"
Feb 14 19:03:49 crc kubenswrapper[4897]: I0214 19:03:49.867506 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"]
Feb 14 19:03:49 crc kubenswrapper[4897]: W0214 19:03:49.875982 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ddc51d6_ba42_4a8c_8488_24ab847bd808.slice/crio-2c6ece658d0bf0e388939dbbe5dee711cadb37a538a40a49dee7c724527f59c0 WatchSource:0}: Error finding container 2c6ece658d0bf0e388939dbbe5dee711cadb37a538a40a49dee7c724527f59c0: Status 404 returned error can't find the container with id 2c6ece658d0bf0e388939dbbe5dee711cadb37a538a40a49dee7c724527f59c0
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.003670 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9baa-account-create-update-rbzr5"]
Feb 14 19:03:50 crc kubenswrapper[4897]: W0214 19:03:50.017206 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e556a75_3106_43db_b4da_53c6df99cd35.slice/crio-eabb6414b8694afe47b807aaf22e2a0e7bf62df39459976a9332b613c5d98f83 WatchSource:0}: Error finding container eabb6414b8694afe47b807aaf22e2a0e7bf62df39459976a9332b613c5d98f83: Status 404 returned error can't find the container with id eabb6414b8694afe47b807aaf22e2a0e7bf62df39459976a9332b613c5d98f83
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.217277 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk" event={"ID":"1ddc51d6-ba42-4a8c-8488-24ab847bd808","Type":"ContainerStarted","Data":"2c6ece658d0bf0e388939dbbe5dee711cadb37a538a40a49dee7c724527f59c0"}
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.222048 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wxmjz" event={"ID":"ed950cca-3c6f-42a6-ac02-9e1290251fba","Type":"ContainerDied","Data":"ef21452f378a37c9fc8d9eb448f43c84bdadea6527c9147e47ee1a1da0a0dcda"}
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.222106 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef21452f378a37c9fc8d9eb448f43c84bdadea6527c9147e47ee1a1da0a0dcda"
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.224940 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5" event={"ID":"7e556a75-3106-43db-b4da-53c6df99cd35","Type":"ContainerStarted","Data":"eabb6414b8694afe47b807aaf22e2a0e7bf62df39459976a9332b613c5d98f83"}
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.531296 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wxmjz"
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.564717 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4zb2\" (UniqueName: \"kubernetes.io/projected/ed950cca-3c6f-42a6-ac02-9e1290251fba-kube-api-access-f4zb2\") pod \"ed950cca-3c6f-42a6-ac02-9e1290251fba\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.564935 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed950cca-3c6f-42a6-ac02-9e1290251fba-operator-scripts\") pod \"ed950cca-3c6f-42a6-ac02-9e1290251fba\" (UID: \"ed950cca-3c6f-42a6-ac02-9e1290251fba\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.565923 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed950cca-3c6f-42a6-ac02-9e1290251fba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed950cca-3c6f-42a6-ac02-9e1290251fba" (UID: "ed950cca-3c6f-42a6-ac02-9e1290251fba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.617355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed950cca-3c6f-42a6-ac02-9e1290251fba-kube-api-access-f4zb2" (OuterVolumeSpecName: "kube-api-access-f4zb2") pod "ed950cca-3c6f-42a6-ac02-9e1290251fba" (UID: "ed950cca-3c6f-42a6-ac02-9e1290251fba"). InnerVolumeSpecName "kube-api-access-f4zb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.667694 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4zb2\" (UniqueName: \"kubernetes.io/projected/ed950cca-3c6f-42a6-ac02-9e1290251fba-kube-api-access-f4zb2\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.667723 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed950cca-3c6f-42a6-ac02-9e1290251fba-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.713368 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nm7qg"
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.769316 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-scripts\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.769899 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzbgp\" (UniqueName: \"kubernetes.io/projected/18272353-8a77-4df9-baab-a4c2a6e6d0cb-kube-api-access-bzbgp\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.769933 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-swiftconf\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.769982 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-combined-ca-bundle\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.770017 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-dispersionconf\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.770147 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-ring-data-devices\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.770208 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/18272353-8a77-4df9-baab-a4c2a6e6d0cb-etc-swift\") pod \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\" (UID: \"18272353-8a77-4df9-baab-a4c2a6e6d0cb\") "
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.771900 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18272353-8a77-4df9-baab-a4c2a6e6d0cb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.772666 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.784882 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18272353-8a77-4df9-baab-a4c2a6e6d0cb-kube-api-access-bzbgp" (OuterVolumeSpecName: "kube-api-access-bzbgp") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "kube-api-access-bzbgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.823968 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.836998 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.845379 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.861296 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-scripts" (OuterVolumeSpecName: "scripts") pod "18272353-8a77-4df9-baab-a4c2a6e6d0cb" (UID: "18272353-8a77-4df9-baab-a4c2a6e6d0cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872021 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzbgp\" (UniqueName: \"kubernetes.io/projected/18272353-8a77-4df9-baab-a4c2a6e6d0cb-kube-api-access-bzbgp\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872065 4897 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872075 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872084 4897 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/18272353-8a77-4df9-baab-a4c2a6e6d0cb-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872092 4897 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872101 4897 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/18272353-8a77-4df9-baab-a4c2a6e6d0cb-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:50 crc kubenswrapper[4897]: I0214 19:03:50.872109 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18272353-8a77-4df9-baab-a4c2a6e6d0cb-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.243841 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nm7qg"
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.243841 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nm7qg" event={"ID":"18272353-8a77-4df9-baab-a4c2a6e6d0cb","Type":"ContainerDied","Data":"d3ddd761b1677b724655be81adee5036f8d4d55a86b258c2f92ae0e349c84d6d"}
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.243964 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ddd761b1677b724655be81adee5036f8d4d55a86b258c2f92ae0e349c84d6d"
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.245785 4897 generic.go:334] "Generic (PLEG): container finished" podID="1ddc51d6-ba42-4a8c-8488-24ab847bd808" containerID="c74d445a737c375bee8a01d9bf3450f8e4268cf4ec0fae55c46f535645e79997" exitCode=0
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.245839 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk" event={"ID":"1ddc51d6-ba42-4a8c-8488-24ab847bd808","Type":"ContainerDied","Data":"c74d445a737c375bee8a01d9bf3450f8e4268cf4ec0fae55c46f535645e79997"}
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.250694 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerStarted","Data":"5d0fe991b3797d44332828873637ab219420e4aaeba2b665e41a89fe818ebd6e"}
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.274759 4897 generic.go:334] "Generic (PLEG): container finished" podID="7e556a75-3106-43db-b4da-53c6df99cd35" containerID="c8ae0358a1da4f7011f4b7fb3ca54d054d2b5cc51f5f21f3d18861677b8a13f0" exitCode=0
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.274977 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wxmjz"
Feb 14 19:03:51 crc kubenswrapper[4897]: I0214 19:03:51.275718 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5" event={"ID":"7e556a75-3106-43db-b4da-53c6df99cd35","Type":"ContainerDied","Data":"c8ae0358a1da4f7011f4b7fb3ca54d054d2b5cc51f5f21f3d18861677b8a13f0"}
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.667008 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.720369 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ddc51d6-ba42-4a8c-8488-24ab847bd808-operator-scripts\") pod \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") "
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.720585 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxcpz\" (UniqueName: \"kubernetes.io/projected/1ddc51d6-ba42-4a8c-8488-24ab847bd808-kube-api-access-nxcpz\") pod \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\" (UID: \"1ddc51d6-ba42-4a8c-8488-24ab847bd808\") "
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.721202 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddc51d6-ba42-4a8c-8488-24ab847bd808-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ddc51d6-ba42-4a8c-8488-24ab847bd808" (UID: "1ddc51d6-ba42-4a8c-8488-24ab847bd808"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.721685 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ddc51d6-ba42-4a8c-8488-24ab847bd808-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.795152 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddc51d6-ba42-4a8c-8488-24ab847bd808-kube-api-access-nxcpz" (OuterVolumeSpecName: "kube-api-access-nxcpz") pod "1ddc51d6-ba42-4a8c-8488-24ab847bd808" (UID: "1ddc51d6-ba42-4a8c-8488-24ab847bd808"). InnerVolumeSpecName "kube-api-access-nxcpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:03:52 crc kubenswrapper[4897]: I0214 19:03:52.823811 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxcpz\" (UniqueName: \"kubernetes.io/projected/1ddc51d6-ba42-4a8c-8488-24ab847bd808-kube-api-access-nxcpz\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.299395 4897 generic.go:334] "Generic (PLEG): container finished" podID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerID="cbdac35dc72f27a3253bb19267a193ec38202343ba5dde4d824ec972949ec729" exitCode=0
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.299542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"75b00edc-276b-4e3b-84c1-db17e1eeb3ee","Type":"ContainerDied","Data":"cbdac35dc72f27a3253bb19267a193ec38202343ba5dde4d824ec972949ec729"}
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.302488 4897 generic.go:334] "Generic (PLEG): container finished" podID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerID="bb5453fc7c803ba4c78169d1d9f1ca44c2597e317e1cdc22384f1796b179a86c" exitCode=0
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.302549 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3e532d34-b3bb-4f63-bc64-6b6cc22666b0","Type":"ContainerDied","Data":"bb5453fc7c803ba4c78169d1d9f1ca44c2597e317e1cdc22384f1796b179a86c"}
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.304710 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk" event={"ID":"1ddc51d6-ba42-4a8c-8488-24ab847bd808","Type":"ContainerDied","Data":"2c6ece658d0bf0e388939dbbe5dee711cadb37a538a40a49dee7c724527f59c0"}
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.304743 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6ece658d0bf0e388939dbbe5dee711cadb37a538a40a49dee7c724527f59c0"
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.304798 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.518360 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.667154 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5"
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.753772 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e556a75-3106-43db-b4da-53c6df99cd35-operator-scripts\") pod \"7e556a75-3106-43db-b4da-53c6df99cd35\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") "
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.754274 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e556a75-3106-43db-b4da-53c6df99cd35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e556a75-3106-43db-b4da-53c6df99cd35" (UID: "7e556a75-3106-43db-b4da-53c6df99cd35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.754434 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-624qx\" (UniqueName: \"kubernetes.io/projected/7e556a75-3106-43db-b4da-53c6df99cd35-kube-api-access-624qx\") pod \"7e556a75-3106-43db-b4da-53c6df99cd35\" (UID: \"7e556a75-3106-43db-b4da-53c6df99cd35\") "
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.754954 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e556a75-3106-43db-b4da-53c6df99cd35-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.767719 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e556a75-3106-43db-b4da-53c6df99cd35-kube-api-access-624qx" (OuterVolumeSpecName: "kube-api-access-624qx") pod "7e556a75-3106-43db-b4da-53c6df99cd35" (UID: "7e556a75-3106-43db-b4da-53c6df99cd35"). InnerVolumeSpecName "kube-api-access-624qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:03:53 crc kubenswrapper[4897]: I0214 19:03:53.856913 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-624qx\" (UniqueName: \"kubernetes.io/projected/7e556a75-3106-43db-b4da-53c6df99cd35-kube-api-access-624qx\") on node \"crc\" DevicePath \"\"" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.316609 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3e532d34-b3bb-4f63-bc64-6b6cc22666b0","Type":"ContainerStarted","Data":"09d64742c29c0487e12d87473de7e26082faebf923d1f5ccc5a3856364def3a5"} Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.317113 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.320662 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerStarted","Data":"177f9d537ec44b951e59558035d383275150de60899385d672dba1bf4c407189"} Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.324846 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"75b00edc-276b-4e3b-84c1-db17e1eeb3ee","Type":"ContainerStarted","Data":"f55537400280848c8107974904a1cdcd30ba7c25d7ae2f56bedeab430743c3f3"} Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.325095 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.327857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5" event={"ID":"7e556a75-3106-43db-b4da-53c6df99cd35","Type":"ContainerDied","Data":"eabb6414b8694afe47b807aaf22e2a0e7bf62df39459976a9332b613c5d98f83"} Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.327887 
4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eabb6414b8694afe47b807aaf22e2a0e7bf62df39459976a9332b613c5d98f83" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.327949 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9baa-account-create-update-rbzr5" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.342113 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=37.866765217 podStartE2EDuration="1m2.34209666s" podCreationTimestamp="2026-02-14 19:02:52 +0000 UTC" firstStartedPulling="2026-02-14 19:02:54.486224405 +0000 UTC m=+1227.462632888" lastFinishedPulling="2026-02-14 19:03:18.961555848 +0000 UTC m=+1251.937964331" observedRunningTime="2026-02-14 19:03:54.337892426 +0000 UTC m=+1287.314300899" watchObservedRunningTime="2026-02-14 19:03:54.34209666 +0000 UTC m=+1287.318505143" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.364139 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.414310977 podStartE2EDuration="56.36411981s" podCreationTimestamp="2026-02-14 19:02:58 +0000 UTC" firstStartedPulling="2026-02-14 19:03:19.607479039 +0000 UTC m=+1252.583887512" lastFinishedPulling="2026-02-14 19:03:53.557287852 +0000 UTC m=+1286.533696345" observedRunningTime="2026-02-14 19:03:54.358664267 +0000 UTC m=+1287.335072780" watchObservedRunningTime="2026-02-14 19:03:54.36411981 +0000 UTC m=+1287.340528293" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.391069 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.384767667 podStartE2EDuration="1m2.391046586s" podCreationTimestamp="2026-02-14 19:02:52 +0000 UTC" firstStartedPulling="2026-02-14 19:02:54.729490518 +0000 UTC m=+1227.705899001" 
lastFinishedPulling="2026-02-14 19:03:18.735769437 +0000 UTC m=+1251.712177920" observedRunningTime="2026-02-14 19:03:54.390288703 +0000 UTC m=+1287.366697196" watchObservedRunningTime="2026-02-14 19:03:54.391046586 +0000 UTC m=+1287.367455069" Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.973793 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wxmjz"] Feb 14 19:03:54 crc kubenswrapper[4897]: I0214 19:03:54.982281 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wxmjz"] Feb 14 19:03:55 crc kubenswrapper[4897]: I0214 19:03:55.009914 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 19:03:55 crc kubenswrapper[4897]: I0214 19:03:55.807401 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed950cca-3c6f-42a6-ac02-9e1290251fba" path="/var/lib/kubelet/pods/ed950cca-3c6f-42a6-ac02-9e1290251fba/volumes" Feb 14 19:03:55 crc kubenswrapper[4897]: I0214 19:03:55.984879 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b9ddbfb7b-bnlsc" podUID="5431c44c-05b0-4319-867b-49e3bf15174c" containerName="console" containerID="cri-o://6063952d86797d4bae77425130ce9ce6013b306adac3ea54a297e35c746736af" gracePeriod=15 Feb 14 19:03:56 crc kubenswrapper[4897]: I0214 19:03:56.370153 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b9ddbfb7b-bnlsc_5431c44c-05b0-4319-867b-49e3bf15174c/console/0.log" Feb 14 19:03:56 crc kubenswrapper[4897]: I0214 19:03:56.370603 4897 generic.go:334] "Generic (PLEG): container finished" podID="5431c44c-05b0-4319-867b-49e3bf15174c" containerID="6063952d86797d4bae77425130ce9ce6013b306adac3ea54a297e35c746736af" exitCode=2 Feb 14 19:03:56 crc kubenswrapper[4897]: I0214 19:03:56.370721 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-7b9ddbfb7b-bnlsc" event={"ID":"5431c44c-05b0-4319-867b-49e3bf15174c","Type":"ContainerDied","Data":"6063952d86797d4bae77425130ce9ce6013b306adac3ea54a297e35c746736af"} Feb 14 19:03:56 crc kubenswrapper[4897]: I0214 19:03:56.906892 4897 patch_prober.go:28] interesting pod/console-7b9ddbfb7b-bnlsc container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.90:8443/health\": dial tcp 10.217.0.90:8443: connect: connection refused" start-of-body= Feb 14 19:03:56 crc kubenswrapper[4897]: I0214 19:03:56.907324 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7b9ddbfb7b-bnlsc" podUID="5431c44c-05b0-4319-867b-49e3bf15174c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.90:8443/health\": dial tcp 10.217.0.90:8443: connect: connection refused" Feb 14 19:03:57 crc kubenswrapper[4897]: I0214 19:03:57.256346 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wlxqg" podUID="c6a557e7-f135-4a79-9525-aed106fd814c" containerName="ovn-controller" probeResult="failure" output=< Feb 14 19:03:57 crc kubenswrapper[4897]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 19:03:57 crc kubenswrapper[4897]: > Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.327509 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:03:59 crc kubenswrapper[4897]: E0214 19:03:59.328299 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e556a75-3106-43db-b4da-53c6df99cd35" containerName="mariadb-account-create-update" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328319 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e556a75-3106-43db-b4da-53c6df99cd35" containerName="mariadb-account-create-update" Feb 14 19:03:59 crc kubenswrapper[4897]: E0214 19:03:59.328337 4897 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1ddc51d6-ba42-4a8c-8488-24ab847bd808" containerName="mariadb-database-create" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328346 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddc51d6-ba42-4a8c-8488-24ab847bd808" containerName="mariadb-database-create" Feb 14 19:03:59 crc kubenswrapper[4897]: E0214 19:03:59.328373 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed950cca-3c6f-42a6-ac02-9e1290251fba" containerName="mariadb-account-create-update" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328381 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed950cca-3c6f-42a6-ac02-9e1290251fba" containerName="mariadb-account-create-update" Feb 14 19:03:59 crc kubenswrapper[4897]: E0214 19:03:59.328409 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18272353-8a77-4df9-baab-a4c2a6e6d0cb" containerName="swift-ring-rebalance" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328419 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="18272353-8a77-4df9-baab-a4c2a6e6d0cb" containerName="swift-ring-rebalance" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328666 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="18272353-8a77-4df9-baab-a4c2a6e6d0cb" containerName="swift-ring-rebalance" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328689 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ddc51d6-ba42-4a8c-8488-24ab847bd808" containerName="mariadb-database-create" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328727 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e556a75-3106-43db-b4da-53c6df99cd35" containerName="mariadb-account-create-update" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.328754 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed950cca-3c6f-42a6-ac02-9e1290251fba" 
containerName="mariadb-account-create-update" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.329600 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.332625 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.348554 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.477964 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xl5g\" (UniqueName: \"kubernetes.io/projected/a3bb3e8e-2264-4122-be43-4c1be375ceb1-kube-api-access-9xl5g\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.478051 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-config-data\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.478231 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.580053 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-config-data\") pod \"mysqld-exporter-0\" (UID: 
\"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.580218 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.580289 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xl5g\" (UniqueName: \"kubernetes.io/projected/a3bb3e8e-2264-4122-be43-4c1be375ceb1-kube-api-access-9xl5g\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.585950 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-config-data\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.586510 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.603203 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xl5g\" (UniqueName: \"kubernetes.io/projected/a3bb3e8e-2264-4122-be43-4c1be375ceb1-kube-api-access-9xl5g\") pod \"mysqld-exporter-0\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") " pod="openstack/mysqld-exporter-0" Feb 14 19:03:59 crc kubenswrapper[4897]: I0214 19:03:59.653431 4897 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.003199 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9gch6"] Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.010248 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.010575 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.013307 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.028429 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.061471 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9gch6"] Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.092163 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgxl\" (UniqueName: \"kubernetes.io/projected/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-kube-api-access-qpgxl\") pod \"root-account-create-update-9gch6\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") " pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.092447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-operator-scripts\") pod \"root-account-create-update-9gch6\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") " 
pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.194241 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-operator-scripts\") pod \"root-account-create-update-9gch6\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") " pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.194324 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpgxl\" (UniqueName: \"kubernetes.io/projected/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-kube-api-access-qpgxl\") pod \"root-account-create-update-9gch6\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") " pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.195254 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-operator-scripts\") pod \"root-account-create-update-9gch6\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") " pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.235108 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpgxl\" (UniqueName: \"kubernetes.io/projected/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-kube-api-access-qpgxl\") pod \"root-account-create-update-9gch6\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") " pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.371370 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9gch6" Feb 14 19:04:00 crc kubenswrapper[4897]: I0214 19:04:00.408382 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:01 crc kubenswrapper[4897]: I0214 19:04:01.825388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:04:01 crc kubenswrapper[4897]: I0214 19:04:01.832524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/674b3cbc-fa6f-4475-bebd-314f24beaaa0-etc-swift\") pod \"swift-storage-0\" (UID: \"674b3cbc-fa6f-4475-bebd-314f24beaaa0\") " pod="openstack/swift-storage-0" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.030082 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.305427 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wlxqg" podUID="c6a557e7-f135-4a79-9525-aed106fd814c" containerName="ovn-controller" probeResult="failure" output=< Feb 14 19:04:02 crc kubenswrapper[4897]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 19:04:02 crc kubenswrapper[4897]: > Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.316739 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.319011 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8jqrb" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.482446 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b9ddbfb7b-bnlsc_5431c44c-05b0-4319-867b-49e3bf15174c/console/0.log" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.483561 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.556198 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wlxqg-config-2jcmq"] Feb 14 19:04:02 crc kubenswrapper[4897]: E0214 19:04:02.558149 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5431c44c-05b0-4319-867b-49e3bf15174c" containerName="console" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.558179 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5431c44c-05b0-4319-867b-49e3bf15174c" containerName="console" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.559607 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5431c44c-05b0-4319-867b-49e3bf15174c" containerName="console" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.561933 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.565931 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-console-config\") pod \"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.576185 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-serving-cert\") pod \"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.576280 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-oauth-config\") pod 
\"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.576344 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-trusted-ca-bundle\") pod \"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.576386 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-service-ca\") pod \"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.576451 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjpg\" (UniqueName: \"kubernetes.io/projected/5431c44c-05b0-4319-867b-49e3bf15174c-kube-api-access-6mjpg\") pod \"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.576475 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-oauth-serving-cert\") pod \"5431c44c-05b0-4319-867b-49e3bf15174c\" (UID: \"5431c44c-05b0-4319-867b-49e3bf15174c\") " Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.568189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-console-config" (OuterVolumeSpecName: "console-config") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.578331 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wlxqg-config-2jcmq"] Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.578488 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.578786 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.579184 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-service-ca" (OuterVolumeSpecName: "service-ca") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.579320 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.587808 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.594679 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.604617 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5431c44c-05b0-4319-867b-49e3bf15174c-kube-api-access-6mjpg" (OuterVolumeSpecName: "kube-api-access-6mjpg") pod "5431c44c-05b0-4319-867b-49e3bf15174c" (UID: "5431c44c-05b0-4319-867b-49e3bf15174c"). InnerVolumeSpecName "kube-api-access-6mjpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.654927 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.655231 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="config-reloader" containerID="cri-o://5d0fe991b3797d44332828873637ab219420e4aaeba2b665e41a89fe818ebd6e" gracePeriod=600 Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.655222 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="thanos-sidecar" containerID="cri-o://177f9d537ec44b951e59558035d383275150de60899385d672dba1bf4c407189" gracePeriod=600 Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.655184 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="prometheus" containerID="cri-o://a9eafbd8ceae4ac75efdffe4f8ba4141b2e5c47225c19003b0f379d5b0e48c75" gracePeriod=600 Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683507 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run-ovn\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683569 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run\") pod 
\"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683632 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-additional-scripts\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683706 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79ss\" (UniqueName: \"kubernetes.io/projected/13bbca50-8ee9-4865-b3cd-19701f17e330-kube-api-access-f79ss\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683797 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-scripts\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683831 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-log-ovn\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683912 4897 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683924 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mjpg\" (UniqueName: \"kubernetes.io/projected/5431c44c-05b0-4319-867b-49e3bf15174c-kube-api-access-6mjpg\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683934 4897 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683941 4897 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683949 4897 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683956 4897 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5431c44c-05b0-4319-867b-49e3bf15174c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.683965 4897 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5431c44c-05b0-4319-867b-49e3bf15174c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.767172 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:04:02 crc kubenswrapper[4897]: W0214 19:04:02.779857 4897 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3bb3e8e_2264_4122_be43_4c1be375ceb1.slice/crio-b514975a338e793b13f844a9ca625722a4c053c2ee4ec8371d4f42f2332d8f6e WatchSource:0}: Error finding container b514975a338e793b13f844a9ca625722a4c053c2ee4ec8371d4f42f2332d8f6e: Status 404 returned error can't find the container with id b514975a338e793b13f844a9ca625722a4c053c2ee4ec8371d4f42f2332d8f6e Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.788728 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-additional-scripts\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.788844 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f79ss\" (UniqueName: \"kubernetes.io/projected/13bbca50-8ee9-4865-b3cd-19701f17e330-kube-api-access-f79ss\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.788962 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-scripts\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.789002 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-log-ovn\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: 
\"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.789113 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run-ovn\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.789142 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.789458 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.790053 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-additional-scripts\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.791846 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-scripts\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " 
pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.791892 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-log-ovn\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.791925 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run-ovn\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.792742 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9gch6"] Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.799575 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.810492 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f79ss\" (UniqueName: \"kubernetes.io/projected/13bbca50-8ee9-4865-b3cd-19701f17e330-kube-api-access-f79ss\") pod \"ovn-controller-wlxqg-config-2jcmq\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") " pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:02 crc kubenswrapper[4897]: I0214 19:04:02.892587 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wlxqg-config-2jcmq" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.063446 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.413397 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wlxqg-config-2jcmq"] Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.457832 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9gch6" event={"ID":"d0aeb6a0-bc14-4f52-8c20-d483e67320b5","Type":"ContainerStarted","Data":"b79850ead4cbcf0016b329c20855eb14b393ef6e9bf11faa7571b8df600150c4"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.457873 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9gch6" event={"ID":"d0aeb6a0-bc14-4f52-8c20-d483e67320b5","Type":"ContainerStarted","Data":"a99e1b4a2d26a4856e47810ddb4ddfbdebffbd91ccda7440a308b4ae7c347f5f"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.460150 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a3bb3e8e-2264-4122-be43-4c1be375ceb1","Type":"ContainerStarted","Data":"b514975a338e793b13f844a9ca625722a4c053c2ee4ec8371d4f42f2332d8f6e"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.475833 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9gch6" podStartSLOduration=4.475812776 podStartE2EDuration="4.475812776s" podCreationTimestamp="2026-02-14 19:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:03.474093431 +0000 UTC m=+1296.450501924" watchObservedRunningTime="2026-02-14 19:04:03.475812776 +0000 UTC m=+1296.452221259" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.477762 4897 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-7b9ddbfb7b-bnlsc_5431c44c-05b0-4319-867b-49e3bf15174c/console/0.log" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.477835 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b9ddbfb7b-bnlsc" event={"ID":"5431c44c-05b0-4319-867b-49e3bf15174c","Type":"ContainerDied","Data":"9bfebe629d9cafd8981a767f449fe725089170a10821983ed14d3eeaff8a45d0"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.477870 4897 scope.go:117] "RemoveContainer" containerID="6063952d86797d4bae77425130ce9ce6013b306adac3ea54a297e35c746736af" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.477985 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b9ddbfb7b-bnlsc" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.489983 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7gfbg" event={"ID":"731750fa-408a-46ef-89bb-5491267222fb","Type":"ContainerStarted","Data":"3fffb61f615afaa98a0b5adbddabb548d77bd6b052a72ac670ddc2da16f9e975"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.504995 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wlxqg-config-2jcmq" event={"ID":"13bbca50-8ee9-4865-b3cd-19701f17e330","Type":"ContainerStarted","Data":"892f83890e85f8e724413409f2fffbbcaeaa236ac37c8f5ca4587cb20237c537"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.507567 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7gfbg" podStartSLOduration=2.55520168 podStartE2EDuration="16.507546865s" podCreationTimestamp="2026-02-14 19:03:47 +0000 UTC" firstStartedPulling="2026-02-14 19:03:48.283472537 +0000 UTC m=+1281.259881020" lastFinishedPulling="2026-02-14 19:04:02.235817702 +0000 UTC m=+1295.212226205" observedRunningTime="2026-02-14 19:04:03.505197421 +0000 UTC m=+1296.481605904" 
watchObservedRunningTime="2026-02-14 19:04:03.507546865 +0000 UTC m=+1296.483955348" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.514525 4897 generic.go:334] "Generic (PLEG): container finished" podID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerID="177f9d537ec44b951e59558035d383275150de60899385d672dba1bf4c407189" exitCode=0 Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.514562 4897 generic.go:334] "Generic (PLEG): container finished" podID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerID="5d0fe991b3797d44332828873637ab219420e4aaeba2b665e41a89fe818ebd6e" exitCode=0 Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.514573 4897 generic.go:334] "Generic (PLEG): container finished" podID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerID="a9eafbd8ceae4ac75efdffe4f8ba4141b2e5c47225c19003b0f379d5b0e48c75" exitCode=0 Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.514627 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerDied","Data":"177f9d537ec44b951e59558035d383275150de60899385d672dba1bf4c407189"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.514657 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerDied","Data":"5d0fe991b3797d44332828873637ab219420e4aaeba2b665e41a89fe818ebd6e"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.514666 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerDied","Data":"a9eafbd8ceae4ac75efdffe4f8ba4141b2e5c47225c19003b0f379d5b0e48c75"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.520224 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"3e05493ca489a6bbc524ba719e40f745761a91b411018c60b89ec69f98811b4d"} Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.604186 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b9ddbfb7b-bnlsc"] Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.610710 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b9ddbfb7b-bnlsc"] Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.809762 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5431c44c-05b0-4319-867b-49e3bf15174c" path="/var/lib/kubelet/pods/5431c44c-05b0-4319-867b-49e3bf15174c/volumes" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.846405 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.860060 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.923820 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.923873 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-2\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.923945 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-web-config\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.923966 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-thanos-prometheus-http-client-file\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.924047 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/42b73b5c-bc43-4e91-9e3d-255ed69831db-config-out\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.924071 
4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-config\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.924156 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-tls-assets\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.924208 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7gdd\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-kube-api-access-l7gdd\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.924271 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-0\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.924302 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-1\") pod \"42b73b5c-bc43-4e91-9e3d-255ed69831db\" (UID: \"42b73b5c-bc43-4e91-9e3d-255ed69831db\") " Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.926403 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.926969 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.928702 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.935920 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b73b5c-bc43-4e91-9e3d-255ed69831db-config-out" (OuterVolumeSpecName: "config-out") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.936658 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-kube-api-access-l7gdd" (OuterVolumeSpecName: "kube-api-access-l7gdd") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "kube-api-access-l7gdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.951866 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.952227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-config" (OuterVolumeSpecName: "config") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.956177 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:03 crc kubenswrapper[4897]: I0214 19:04:03.972276 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "pvc-852a68ed-aa87-465e-9176-9ccd923320c6". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.002436 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-web-config" (OuterVolumeSpecName: "web-config") pod "42b73b5c-bc43-4e91-9e3d-255ed69831db" (UID: "42b73b5c-bc43-4e91-9e3d-255ed69831db"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.026757 4897 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/42b73b5c-bc43-4e91-9e3d-255ed69831db-config-out\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027000 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027071 4897 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027176 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7gdd\" (UniqueName: \"kubernetes.io/projected/42b73b5c-bc43-4e91-9e3d-255ed69831db-kube-api-access-l7gdd\") on node 
\"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027302 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027446 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027556 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") on node \"crc\" " Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.027630 4897 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/42b73b5c-bc43-4e91-9e3d-255ed69831db-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.028426 4897 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-web-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.028526 4897 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/42b73b5c-bc43-4e91-9e3d-255ed69831db-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.063053 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping UnmountDevice... Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.063185 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-852a68ed-aa87-465e-9176-9ccd923320c6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6") on node "crc" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.131104 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.175223 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.531811 4897 generic.go:334] "Generic (PLEG): container finished" podID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerID="0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04" exitCode=0 Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.531883 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32d6ef5f-5f6d-4563-91e7-94928fbe901d","Type":"ContainerDied","Data":"0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04"} Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.534598 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wlxqg-config-2jcmq" event={"ID":"13bbca50-8ee9-4865-b3cd-19701f17e330","Type":"ContainerStarted","Data":"7ab556f728a06c47b4fde7486fc8ac96b1b5906651fbc47fff920342644a0761"} Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.537941 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"42b73b5c-bc43-4e91-9e3d-255ed69831db","Type":"ContainerDied","Data":"178227235efbc6fdf2a9a03f9742b3057f52abc07163ec63bf042cd3ccc28931"} Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.537982 4897 scope.go:117] "RemoveContainer" containerID="177f9d537ec44b951e59558035d383275150de60899385d672dba1bf4c407189" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.537979 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.540670 4897 generic.go:334] "Generic (PLEG): container finished" podID="d0aeb6a0-bc14-4f52-8c20-d483e67320b5" containerID="b79850ead4cbcf0016b329c20855eb14b393ef6e9bf11faa7571b8df600150c4" exitCode=0 Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.541530 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9gch6" event={"ID":"d0aeb6a0-bc14-4f52-8c20-d483e67320b5","Type":"ContainerDied","Data":"b79850ead4cbcf0016b329c20855eb14b393ef6e9bf11faa7571b8df600150c4"} Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.644960 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wlxqg-config-2jcmq" podStartSLOduration=2.6449407259999997 podStartE2EDuration="2.644940726s" podCreationTimestamp="2026-02-14 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:04.592754456 +0000 UTC m=+1297.569162939" watchObservedRunningTime="2026-02-14 19:04:04.644940726 +0000 UTC m=+1297.621349209" Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.674627 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.690021 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] 
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.716071 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 14 19:04:04 crc kubenswrapper[4897]: E0214 19:04:04.716895 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="config-reloader"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.718992 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="config-reloader"
Feb 14 19:04:04 crc kubenswrapper[4897]: E0214 19:04:04.719098 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="thanos-sidecar"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.724605 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="thanos-sidecar"
Feb 14 19:04:04 crc kubenswrapper[4897]: E0214 19:04:04.724712 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="init-config-reloader"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.724778 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="init-config-reloader"
Feb 14 19:04:04 crc kubenswrapper[4897]: E0214 19:04:04.724841 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="prometheus"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.724891 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="prometheus"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.725337 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="thanos-sidecar"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.725399 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="config-reloader"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.725458 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" containerName="prometheus"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.727380 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.746970 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747074 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747102 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747314 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-b7qjw"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747396 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747507 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747332 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.747356 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.750043 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.751480 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851491 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851547 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7xn\" (UniqueName: \"kubernetes.io/projected/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-kube-api-access-2v7xn\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851630 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851685 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851703 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-config\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851735 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851808 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851835 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851854 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851871 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851885 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851913 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.851951 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953280 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953342 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953361 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953379 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953403 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953423 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953444 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953518 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953551 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7xn\" (UniqueName: \"kubernetes.io/projected/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-kube-api-access-2v7xn\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953580 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953619 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953637 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.953653 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-config\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.954686 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.955123 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.955905 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.957525 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.958158 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.958270 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.958746 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-config\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.960367 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.960539 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.960561 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8463609d0a12805b11ee43aef10868d3872f9002ead69ad9b6a8dbbf5475c501/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.966409 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.971409 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7xn\" (UniqueName: \"kubernetes.io/projected/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-kube-api-access-2v7xn\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.976746 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:04 crc kubenswrapper[4897]: I0214 19:04:04.978618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3c77ebc2-8dc3-4b0f-8f95-b3208b853935-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:05 crc kubenswrapper[4897]: I0214 19:04:05.006521 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-852a68ed-aa87-465e-9176-9ccd923320c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-852a68ed-aa87-465e-9176-9ccd923320c6\") pod \"prometheus-metric-storage-0\" (UID: \"3c77ebc2-8dc3-4b0f-8f95-b3208b853935\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:05 crc kubenswrapper[4897]: I0214 19:04:05.058927 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 14 19:04:05 crc kubenswrapper[4897]: I0214 19:04:05.553972 4897 generic.go:334] "Generic (PLEG): container finished" podID="13bbca50-8ee9-4865-b3cd-19701f17e330" containerID="7ab556f728a06c47b4fde7486fc8ac96b1b5906651fbc47fff920342644a0761" exitCode=0
Feb 14 19:04:05 crc kubenswrapper[4897]: I0214 19:04:05.554044 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wlxqg-config-2jcmq" event={"ID":"13bbca50-8ee9-4865-b3cd-19701f17e330","Type":"ContainerDied","Data":"7ab556f728a06c47b4fde7486fc8ac96b1b5906651fbc47fff920342644a0761"}
Feb 14 19:04:05 crc kubenswrapper[4897]: I0214 19:04:05.807628 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b73b5c-bc43-4e91-9e3d-255ed69831db" path="/var/lib/kubelet/pods/42b73b5c-bc43-4e91-9e3d-255ed69831db/volumes"
Feb 14 19:04:06 crc kubenswrapper[4897]: I0214 19:04:06.566111 4897 generic.go:334] "Generic (PLEG): container finished" podID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerID="a95df9cbd2a6de16e6cd9decf3036159b9c57f996a07f4fb70e3865a9af7ea81" exitCode=0
Feb 14 19:04:06 crc kubenswrapper[4897]: I0214 19:04:06.566190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c8eb488b-8b48-4dea-8a34-dee3346005ef","Type":"ContainerDied","Data":"a95df9cbd2a6de16e6cd9decf3036159b9c57f996a07f4fb70e3865a9af7ea81"}
Feb 14 19:04:07 crc kubenswrapper[4897]: I0214 19:04:07.261583 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wlxqg"
Feb 14 19:04:07 crc kubenswrapper[4897]: I0214 19:04:07.889653 4897 scope.go:117] "RemoveContainer" containerID="5d0fe991b3797d44332828873637ab219420e4aaeba2b665e41a89fe818ebd6e"
Feb 14 19:04:07 crc kubenswrapper[4897]: I0214 19:04:07.966316 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9gch6"
Feb 14 19:04:07 crc kubenswrapper[4897]: I0214 19:04:07.971099 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wlxqg-config-2jcmq"
Feb 14 19:04:07 crc kubenswrapper[4897]: I0214 19:04:07.976470 4897 scope.go:117] "RemoveContainer" containerID="a9eafbd8ceae4ac75efdffe4f8ba4141b2e5c47225c19003b0f379d5b0e48c75"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:07.999704 4897 scope.go:117] "RemoveContainer" containerID="53fe9c492b6aef0c76559eeb95e05410cfe0e717f929994304c4c15b84519dcf"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.015909 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpgxl\" (UniqueName: \"kubernetes.io/projected/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-kube-api-access-qpgxl\") pod \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.015994 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run\") pod \"13bbca50-8ee9-4865-b3cd-19701f17e330\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016093 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-additional-scripts\") pod \"13bbca50-8ee9-4865-b3cd-19701f17e330\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016116 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-operator-scripts\") pod \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\" (UID: \"d0aeb6a0-bc14-4f52-8c20-d483e67320b5\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016146 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run" (OuterVolumeSpecName: "var-run") pod "13bbca50-8ee9-4865-b3cd-19701f17e330" (UID: "13bbca50-8ee9-4865-b3cd-19701f17e330"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016196 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-scripts\") pod \"13bbca50-8ee9-4865-b3cd-19701f17e330\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016348 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run-ovn\") pod \"13bbca50-8ee9-4865-b3cd-19701f17e330\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f79ss\" (UniqueName: \"kubernetes.io/projected/13bbca50-8ee9-4865-b3cd-19701f17e330-kube-api-access-f79ss\") pod \"13bbca50-8ee9-4865-b3cd-19701f17e330\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.016558 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-log-ovn\") pod \"13bbca50-8ee9-4865-b3cd-19701f17e330\" (UID: \"13bbca50-8ee9-4865-b3cd-19701f17e330\") "
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.017113 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "13bbca50-8ee9-4865-b3cd-19701f17e330" (UID: "13bbca50-8ee9-4865-b3cd-19701f17e330"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.017170 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "13bbca50-8ee9-4865-b3cd-19701f17e330" (UID: "13bbca50-8ee9-4865-b3cd-19701f17e330"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.018013 4897 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.018058 4897 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.018073 4897 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.018103 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "13bbca50-8ee9-4865-b3cd-19701f17e330" (UID: "13bbca50-8ee9-4865-b3cd-19701f17e330"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.018325 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-scripts" (OuterVolumeSpecName: "scripts") pod "13bbca50-8ee9-4865-b3cd-19701f17e330" (UID: "13bbca50-8ee9-4865-b3cd-19701f17e330"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.018524 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0aeb6a0-bc14-4f52-8c20-d483e67320b5" (UID: "d0aeb6a0-bc14-4f52-8c20-d483e67320b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.020508 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bbca50-8ee9-4865-b3cd-19701f17e330-kube-api-access-f79ss" (OuterVolumeSpecName: "kube-api-access-f79ss") pod "13bbca50-8ee9-4865-b3cd-19701f17e330" (UID: "13bbca50-8ee9-4865-b3cd-19701f17e330"). InnerVolumeSpecName "kube-api-access-f79ss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.024527 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-kube-api-access-qpgxl" (OuterVolumeSpecName: "kube-api-access-qpgxl") pod "d0aeb6a0-bc14-4f52-8c20-d483e67320b5" (UID: "d0aeb6a0-bc14-4f52-8c20-d483e67320b5"). InnerVolumeSpecName "kube-api-access-qpgxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.119321 4897 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13bbca50-8ee9-4865-b3cd-19701f17e330-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.119548 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpgxl\" (UniqueName: \"kubernetes.io/projected/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-kube-api-access-qpgxl\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.119560 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0aeb6a0-bc14-4f52-8c20-d483e67320b5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.119569 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13bbca50-8ee9-4865-b3cd-19701f17e330-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.119577 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f79ss\" (UniqueName: \"kubernetes.io/projected/13bbca50-8ee9-4865-b3cd-19701f17e330-kube-api-access-f79ss\") on node \"crc\" DevicePath \"\""
Feb 14 19:04:08 crc kubenswrapper[4897]: W0214 19:04:08.580677 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c77ebc2_8dc3_4b0f_8f95_b3208b853935.slice/crio-acd453e584ac13f5b4f751d070e290e6d820e40ef76f7756b0bd44e98fa0c86e WatchSource:0}: Error finding container acd453e584ac13f5b4f751d070e290e6d820e40ef76f7756b0bd44e98fa0c86e: Status 404 returned error can't find the container with id acd453e584ac13f5b4f751d070e290e6d820e40ef76f7756b0bd44e98fa0c86e
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.589325 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.589929 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"ac125aab65b431eb2883d078b7b77f24a28f2795dd647adc57972b0dbbc58425"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.589969 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"1a5e90ebc9f5dc9a59660c6b2613e6a553750111e4e8b0f2c2118bfd7f851219"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.593126 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9gch6" event={"ID":"d0aeb6a0-bc14-4f52-8c20-d483e67320b5","Type":"ContainerDied","Data":"a99e1b4a2d26a4856e47810ddb4ddfbdebffbd91ccda7440a308b4ae7c347f5f"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.593149 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a99e1b4a2d26a4856e47810ddb4ddfbdebffbd91ccda7440a308b4ae7c347f5f"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.593192 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9gch6"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.601890 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32d6ef5f-5f6d-4563-91e7-94928fbe901d","Type":"ContainerStarted","Data":"a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.602093 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.613323 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a3bb3e8e-2264-4122-be43-4c1be375ceb1","Type":"ContainerStarted","Data":"3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.615715 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c8eb488b-8b48-4dea-8a34-dee3346005ef","Type":"ContainerStarted","Data":"1f4276d1d9c3894f4b7ccd2d8622cc95da988f32fd0a009d8f0acb8310cff86e"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.615915 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.617867 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wlxqg-config-2jcmq" event={"ID":"13bbca50-8ee9-4865-b3cd-19701f17e330","Type":"ContainerDied","Data":"892f83890e85f8e724413409f2fffbbcaeaa236ac37c8f5ca4587cb20237c537"}
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.617896 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="892f83890e85f8e724413409f2fffbbcaeaa236ac37c8f5ca4587cb20237c537"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.617946 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wlxqg-config-2jcmq"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.623148 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371960.231644 podStartE2EDuration="1m16.62313174s" podCreationTimestamp="2026-02-14 19:02:52 +0000 UTC" firstStartedPulling="2026-02-14 19:02:54.085099435 +0000 UTC m=+1227.061507918" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:08.622799929 +0000 UTC m=+1301.599208442" watchObservedRunningTime="2026-02-14 19:04:08.62313174 +0000 UTC m=+1301.599540223"
Feb 14 19:04:08 crc kubenswrapper[4897]: I0214 19:04:08.685248 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371960.16955 podStartE2EDuration="1m16.685226183s" podCreationTimestamp="2026-02-14 19:02:52 +0000 UTC" firstStartedPulling="2026-02-14 19:02:54.265556974 +0000 UTC m=+1227.241965457" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:08.670925841 +0000 UTC m=+1301.647334324" watchObservedRunningTime="2026-02-14 19:04:08.685226183 +0000 UTC m=+1301.661634666"
Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.007323 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.869637275 podStartE2EDuration="10.007147309s" podCreationTimestamp="2026-02-14 19:03:59 +0000 UTC" firstStartedPulling="2026-02-14 19:04:02.799404525 +0000 UTC m=+1295.775813008" lastFinishedPulling="2026-02-14 19:04:07.936914519 +0000 UTC m=+1300.913323042" observedRunningTime="2026-02-14 19:04:08.708829058 +0000 UTC m=+1301.685237541" watchObservedRunningTime="2026-02-14 19:04:09.007147309 +0000 UTC m=+1301.983555792"
Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.113236 4897 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/ovn-controller-wlxqg-config-2jcmq"] Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.122228 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wlxqg-config-2jcmq"] Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.627582 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"bbd96e48aba3b82c1fd814bd99daad779c7e6d92fe5a7be3277a1342d7f1795b"} Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.627625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"b942a535b03ad8ad1438412e8729b406922008ebd28139616eb949abf7281913"} Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.629184 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3c77ebc2-8dc3-4b0f-8f95-b3208b853935","Type":"ContainerStarted","Data":"acd453e584ac13f5b4f751d070e290e6d820e40ef76f7756b0bd44e98fa0c86e"} Feb 14 19:04:09 crc kubenswrapper[4897]: I0214 19:04:09.811691 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bbca50-8ee9-4865-b3cd-19701f17e330" path="/var/lib/kubelet/pods/13bbca50-8ee9-4865-b3cd-19701f17e330/volumes" Feb 14 19:04:11 crc kubenswrapper[4897]: I0214 19:04:11.649195 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3c77ebc2-8dc3-4b0f-8f95-b3208b853935","Type":"ContainerStarted","Data":"b7e89e968b6c9e7500105bf2ea44db741fd703c344c61e25bf12a9880f3ca86f"} Feb 14 19:04:12 crc kubenswrapper[4897]: I0214 19:04:12.667376 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"a2d99124a19f4f9093fb842b87f34153bca4234418143b5a09934c57a4977baf"} Feb 14 19:04:12 crc kubenswrapper[4897]: I0214 19:04:12.667720 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"684f77b056c3aa6402be66dfb743213e4c43e5895e00e3f04be3655c8e779947"} Feb 14 19:04:12 crc kubenswrapper[4897]: I0214 19:04:12.667737 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"e21e02c90e8f2ceb0761b039c2ca16cce4c36e77592d2f0e6b432f84b6d9c4ab"} Feb 14 19:04:13 crc kubenswrapper[4897]: I0214 19:04:13.678547 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"ac7f6e1e365d8bcb12e7b909a0bc5822d639887717f0244c7eeab5a9b0c6afa5"} Feb 14 19:04:13 crc kubenswrapper[4897]: I0214 19:04:13.842202 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 19:04:14 crc kubenswrapper[4897]: I0214 19:04:14.706840 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"b71a27ec5b5bf1167fe7a2a54de925905fc67521c88a69e7f98e73c0f3da4435"} Feb 14 19:04:14 crc kubenswrapper[4897]: I0214 19:04:14.708219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"0a5e4911a5f7ccf0a1de2b9f7a870bfe1943073445d24d4c33da383c9134e3c3"} Feb 14 19:04:14 crc kubenswrapper[4897]: I0214 19:04:14.708327 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"a174ca27cf41077bbe6d90e606af1a1af7c2bdfc91cbf3dfc32a1e7d3e7c583a"} Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.718718 4897 generic.go:334] "Generic (PLEG): container finished" podID="731750fa-408a-46ef-89bb-5491267222fb" containerID="3fffb61f615afaa98a0b5adbddabb548d77bd6b052a72ac670ddc2da16f9e975" exitCode=0 Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.718782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7gfbg" event={"ID":"731750fa-408a-46ef-89bb-5491267222fb","Type":"ContainerDied","Data":"3fffb61f615afaa98a0b5adbddabb548d77bd6b052a72ac670ddc2da16f9e975"} Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.726008 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"85c6838679c51275e165c63c9f4fb20914d6e3bc98c42cb8ec8e5f8bfb68b9f9"} Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.726062 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"c12499dede7c3606ccd4ce5c9fd547af04f6034e24b8918f44581cc7f4c7d064"} Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.726072 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"1648076a8145904868deb50298eaa2da5f72f525d7e0f54276805577c21dea5a"} Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.726081 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"674b3cbc-fa6f-4475-bebd-314f24beaaa0","Type":"ContainerStarted","Data":"52c7885006f4ce680ab9a9ddad6d36cd3b9b7c04cb110838462b263f63f397e5"} Feb 14 19:04:15 crc kubenswrapper[4897]: I0214 19:04:15.823995 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.791698355 podStartE2EDuration="47.823979379s" podCreationTimestamp="2026-02-14 19:03:28 +0000 UTC" firstStartedPulling="2026-02-14 19:04:03.073320256 +0000 UTC m=+1296.049728739" lastFinishedPulling="2026-02-14 19:04:14.10560129 +0000 UTC m=+1307.082009763" observedRunningTime="2026-02-14 19:04:15.820158708 +0000 UTC m=+1308.796567201" watchObservedRunningTime="2026-02-14 19:04:15.823979379 +0000 UTC m=+1308.800387862" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.086932 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8fpcf"] Feb 14 19:04:16 crc kubenswrapper[4897]: E0214 19:04:16.087438 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bbca50-8ee9-4865-b3cd-19701f17e330" containerName="ovn-config" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.087461 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bbca50-8ee9-4865-b3cd-19701f17e330" containerName="ovn-config" Feb 14 19:04:16 crc kubenswrapper[4897]: E0214 19:04:16.087505 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0aeb6a0-bc14-4f52-8c20-d483e67320b5" containerName="mariadb-account-create-update" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.087514 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0aeb6a0-bc14-4f52-8c20-d483e67320b5" containerName="mariadb-account-create-update" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.087763 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bbca50-8ee9-4865-b3cd-19701f17e330" containerName="ovn-config" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.087786 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0aeb6a0-bc14-4f52-8c20-d483e67320b5" containerName="mariadb-account-create-update" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.095657 
4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.101442 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.118655 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8fpcf"] Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.138055 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r927m\" (UniqueName: \"kubernetes.io/projected/492cb897-bf24-4651-a7d7-21c8fd17ab79-kube-api-access-r927m\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.138104 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-config\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.138159 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.138220 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: 
\"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.138274 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.138322 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-svc\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.240529 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.240659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.240726 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-svc\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: 
\"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.240843 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r927m\" (UniqueName: \"kubernetes.io/projected/492cb897-bf24-4651-a7d7-21c8fd17ab79-kube-api-access-r927m\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.240876 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-config\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.240982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.241447 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.241631 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " 
pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.241640 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-svc\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.241793 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-config\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.242204 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.263224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r927m\" (UniqueName: \"kubernetes.io/projected/492cb897-bf24-4651-a7d7-21c8fd17ab79-kube-api-access-r927m\") pod \"dnsmasq-dns-764c5664d7-8fpcf\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.431297 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:16 crc kubenswrapper[4897]: I0214 19:04:16.953745 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8fpcf"] Feb 14 19:04:16 crc kubenswrapper[4897]: W0214 19:04:16.962632 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod492cb897_bf24_4651_a7d7_21c8fd17ab79.slice/crio-e175c554569803132318a2e421fea1339602b57633b4d8692e60311ea5d1d03c WatchSource:0}: Error finding container e175c554569803132318a2e421fea1339602b57633b4d8692e60311ea5d1d03c: Status 404 returned error can't find the container with id e175c554569803132318a2e421fea1339602b57633b4d8692e60311ea5d1d03c Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.361972 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7gfbg" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.563806 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-config-data\") pod \"731750fa-408a-46ef-89bb-5491267222fb\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.564171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-db-sync-config-data\") pod \"731750fa-408a-46ef-89bb-5491267222fb\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.564236 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rnl9\" (UniqueName: \"kubernetes.io/projected/731750fa-408a-46ef-89bb-5491267222fb-kube-api-access-7rnl9\") pod \"731750fa-408a-46ef-89bb-5491267222fb\" (UID: 
\"731750fa-408a-46ef-89bb-5491267222fb\") " Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.564273 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-combined-ca-bundle\") pod \"731750fa-408a-46ef-89bb-5491267222fb\" (UID: \"731750fa-408a-46ef-89bb-5491267222fb\") " Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.577884 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731750fa-408a-46ef-89bb-5491267222fb-kube-api-access-7rnl9" (OuterVolumeSpecName: "kube-api-access-7rnl9") pod "731750fa-408a-46ef-89bb-5491267222fb" (UID: "731750fa-408a-46ef-89bb-5491267222fb"). InnerVolumeSpecName "kube-api-access-7rnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.583068 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "731750fa-408a-46ef-89bb-5491267222fb" (UID: "731750fa-408a-46ef-89bb-5491267222fb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.624405 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "731750fa-408a-46ef-89bb-5491267222fb" (UID: "731750fa-408a-46ef-89bb-5491267222fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.629441 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-config-data" (OuterVolumeSpecName: "config-data") pod "731750fa-408a-46ef-89bb-5491267222fb" (UID: "731750fa-408a-46ef-89bb-5491267222fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.666570 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.666601 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.666612 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rnl9\" (UniqueName: \"kubernetes.io/projected/731750fa-408a-46ef-89bb-5491267222fb-kube-api-access-7rnl9\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.666623 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731750fa-408a-46ef-89bb-5491267222fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.751529 4897 generic.go:334] "Generic (PLEG): container finished" podID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerID="b7e89e968b6c9e7500105bf2ea44db741fd703c344c61e25bf12a9880f3ca86f" exitCode=0 Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.751614 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3c77ebc2-8dc3-4b0f-8f95-b3208b853935","Type":"ContainerDied","Data":"b7e89e968b6c9e7500105bf2ea44db741fd703c344c61e25bf12a9880f3ca86f"} Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.754015 4897 generic.go:334] "Generic (PLEG): container finished" podID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerID="feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819" exitCode=0 Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.754282 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" event={"ID":"492cb897-bf24-4651-a7d7-21c8fd17ab79","Type":"ContainerDied","Data":"feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819"} Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.754307 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" event={"ID":"492cb897-bf24-4651-a7d7-21c8fd17ab79","Type":"ContainerStarted","Data":"e175c554569803132318a2e421fea1339602b57633b4d8692e60311ea5d1d03c"} Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.755785 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7gfbg" event={"ID":"731750fa-408a-46ef-89bb-5491267222fb","Type":"ContainerDied","Data":"7a44322ebb64834d3af613363cbdd550f935ed7107834c5d264cb285fd9208f9"} Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.755805 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a44322ebb64834d3af613363cbdd550f935ed7107834c5d264cb285fd9208f9" Feb 14 19:04:17 crc kubenswrapper[4897]: I0214 19:04:17.755901 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7gfbg" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.157006 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8fpcf"] Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.185266 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wwgp5"] Feb 14 19:04:18 crc kubenswrapper[4897]: E0214 19:04:18.185990 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731750fa-408a-46ef-89bb-5491267222fb" containerName="glance-db-sync" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.186014 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="731750fa-408a-46ef-89bb-5491267222fb" containerName="glance-db-sync" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.186295 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="731750fa-408a-46ef-89bb-5491267222fb" containerName="glance-db-sync" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.187829 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.208284 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wwgp5"] Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.314794 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.314867 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.314931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-config\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.314993 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj46w\" (UniqueName: \"kubernetes.io/projected/f691ef96-83d3-4da6-879d-63f6cdb753a4-kube-api-access-jj46w\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.315261 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.315451 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.417299 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj46w\" (UniqueName: \"kubernetes.io/projected/f691ef96-83d3-4da6-879d-63f6cdb753a4-kube-api-access-jj46w\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.417457 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.417514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.417585 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.417620 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.417663 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-config\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.418524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-config\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.418517 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.418575 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.418748 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.418869 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.452209 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj46w\" (UniqueName: \"kubernetes.io/projected/f691ef96-83d3-4da6-879d-63f6cdb753a4-kube-api-access-jj46w\") pod \"dnsmasq-dns-74f6bcbc87-wwgp5\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.511653 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.776281 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3c77ebc2-8dc3-4b0f-8f95-b3208b853935","Type":"ContainerStarted","Data":"d12e0897d798dc8c3b5c40ae78f7cf206b44977013c93f696aebbb7c4a83ef5f"} Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.784925 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" event={"ID":"492cb897-bf24-4651-a7d7-21c8fd17ab79","Type":"ContainerStarted","Data":"644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380"} Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.786141 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:18 crc kubenswrapper[4897]: I0214 19:04:18.844727 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" podStartSLOduration=2.844707674 podStartE2EDuration="2.844707674s" podCreationTimestamp="2026-02-14 19:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:18.819047003 +0000 UTC m=+1311.795455496" watchObservedRunningTime="2026-02-14 19:04:18.844707674 +0000 UTC m=+1311.821116157" Feb 14 19:04:19 crc kubenswrapper[4897]: I0214 19:04:19.148684 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wwgp5"] Feb 14 19:04:19 crc kubenswrapper[4897]: I0214 19:04:19.795412 4897 generic.go:334] "Generic (PLEG): container finished" podID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerID="fa08aef9083e07fdef1f76deb25a4d81ca13aa7a9e308f056c0cda17fa71cf38" exitCode=0 Feb 14 19:04:19 crc kubenswrapper[4897]: I0214 19:04:19.796214 4897 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerName="dnsmasq-dns" containerID="cri-o://644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380" gracePeriod=10 Feb 14 19:04:19 crc kubenswrapper[4897]: I0214 19:04:19.811128 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" event={"ID":"f691ef96-83d3-4da6-879d-63f6cdb753a4","Type":"ContainerDied","Data":"fa08aef9083e07fdef1f76deb25a4d81ca13aa7a9e308f056c0cda17fa71cf38"} Feb 14 19:04:19 crc kubenswrapper[4897]: I0214 19:04:19.811566 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" event={"ID":"f691ef96-83d3-4da6-879d-63f6cdb753a4","Type":"ContainerStarted","Data":"9c371c5afcac3a4e24e3c6f0696e0c167822690025439f5372e08c28ba3b32ec"} Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.483283 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.663316 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-config\") pod \"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.663410 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-svc\") pod \"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.663474 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-sb\") pod 
\"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.663544 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-nb\") pod \"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.663592 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0\") pod \"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.663701 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r927m\" (UniqueName: \"kubernetes.io/projected/492cb897-bf24-4651-a7d7-21c8fd17ab79-kube-api-access-r927m\") pod \"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.672174 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492cb897-bf24-4651-a7d7-21c8fd17ab79-kube-api-access-r927m" (OuterVolumeSpecName: "kube-api-access-r927m") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). InnerVolumeSpecName "kube-api-access-r927m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.729492 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.745497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-config" (OuterVolumeSpecName: "config") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.752746 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.762204 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.765478 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.765583 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0\") pod \"492cb897-bf24-4651-a7d7-21c8fd17ab79\" (UID: \"492cb897-bf24-4651-a7d7-21c8fd17ab79\") " Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.766022 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r927m\" (UniqueName: \"kubernetes.io/projected/492cb897-bf24-4651-a7d7-21c8fd17ab79-kube-api-access-r927m\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.766054 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.766064 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.766076 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.766084 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:20 crc kubenswrapper[4897]: W0214 19:04:20.766135 4897 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/492cb897-bf24-4651-a7d7-21c8fd17ab79/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 
14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.766144 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "492cb897-bf24-4651-a7d7-21c8fd17ab79" (UID: "492cb897-bf24-4651-a7d7-21c8fd17ab79"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.811252 4897 generic.go:334] "Generic (PLEG): container finished" podID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerID="644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380" exitCode=0 Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.811324 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" event={"ID":"492cb897-bf24-4651-a7d7-21c8fd17ab79","Type":"ContainerDied","Data":"644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380"} Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.811351 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" event={"ID":"492cb897-bf24-4651-a7d7-21c8fd17ab79","Type":"ContainerDied","Data":"e175c554569803132318a2e421fea1339602b57633b4d8692e60311ea5d1d03c"} Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.811366 4897 scope.go:117] "RemoveContainer" containerID="644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.811492 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8fpcf" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.819092 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" event={"ID":"f691ef96-83d3-4da6-879d-63f6cdb753a4","Type":"ContainerStarted","Data":"fa7a2b5fe0d9f19351d0ee6bbedbd6bebcbf47dea78d04a4038a74c2f7a9e737"} Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.819274 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.866720 4897 scope.go:117] "RemoveContainer" containerID="feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.874331 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492cb897-bf24-4651-a7d7-21c8fd17ab79-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.887194 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podStartSLOduration=2.887175336 podStartE2EDuration="2.887175336s" podCreationTimestamp="2026-02-14 19:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:20.854241965 +0000 UTC m=+1313.830650468" watchObservedRunningTime="2026-02-14 19:04:20.887175336 +0000 UTC m=+1313.863583819" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.889108 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8fpcf"] Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.898206 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8fpcf"] Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.907265 4897 scope.go:117] 
"RemoveContainer" containerID="644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380" Feb 14 19:04:20 crc kubenswrapper[4897]: E0214 19:04:20.908128 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380\": container with ID starting with 644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380 not found: ID does not exist" containerID="644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.908181 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380"} err="failed to get container status \"644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380\": rpc error: code = NotFound desc = could not find container \"644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380\": container with ID starting with 644b1008a2c627c18f96e9384a52967412a81511b05d4f0ee3e4cd95dbb6b380 not found: ID does not exist" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.908230 4897 scope.go:117] "RemoveContainer" containerID="feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819" Feb 14 19:04:20 crc kubenswrapper[4897]: E0214 19:04:20.908751 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819\": container with ID starting with feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819 not found: ID does not exist" containerID="feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819" Feb 14 19:04:20 crc kubenswrapper[4897]: I0214 19:04:20.908799 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819"} err="failed to get container status \"feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819\": rpc error: code = NotFound desc = could not find container \"feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819\": container with ID starting with feeee1a1023fc9d2570351435bae7e61fe2abb68a6fe3f502eec3ad2f4977819 not found: ID does not exist" Feb 14 19:04:21 crc kubenswrapper[4897]: I0214 19:04:21.805755 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" path="/var/lib/kubelet/pods/492cb897-bf24-4651-a7d7-21c8fd17ab79/volumes" Feb 14 19:04:21 crc kubenswrapper[4897]: I0214 19:04:21.829133 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3c77ebc2-8dc3-4b0f-8f95-b3208b853935","Type":"ContainerStarted","Data":"e8466cba8a7d8ddad45ba20a508cc939420ada5793c74000676087914e5503a2"} Feb 14 19:04:21 crc kubenswrapper[4897]: I0214 19:04:21.829197 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3c77ebc2-8dc3-4b0f-8f95-b3208b853935","Type":"ContainerStarted","Data":"b2f74b722c282d7a92c86045256dfa7b04092f8c645180d3970b8ce8076c2620"} Feb 14 19:04:21 crc kubenswrapper[4897]: I0214 19:04:21.869188 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.869169537 podStartE2EDuration="17.869169537s" podCreationTimestamp="2026-02-14 19:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:21.859281185 +0000 UTC m=+1314.835689668" watchObservedRunningTime="2026-02-14 19:04:21.869169537 +0000 UTC m=+1314.845578020" Feb 14 19:04:23 crc kubenswrapper[4897]: I0214 19:04:23.519911 4897 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 19:04:23 crc kubenswrapper[4897]: I0214 19:04:23.564578 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.060856 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.548956 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-shmjs"] Feb 14 19:04:25 crc kubenswrapper[4897]: E0214 19:04:25.549700 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerName="init" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.549781 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerName="init" Feb 14 19:04:25 crc kubenswrapper[4897]: E0214 19:04:25.549893 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerName="dnsmasq-dns" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.550201 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerName="dnsmasq-dns" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.550482 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="492cb897-bf24-4651-a7d7-21c8fd17ab79" containerName="dnsmasq-dns" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.551255 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.557452 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-shmjs"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.659607 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a45e-account-create-update-g6bl6"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.660974 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.662713 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.671162 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14911dd9-fb30-4512-bfff-1e5acd6b0b50-operator-scripts\") pod \"cinder-db-create-shmjs\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.671318 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qzl\" (UniqueName: \"kubernetes.io/projected/14911dd9-fb30-4512-bfff-1e5acd6b0b50-kube-api-access-64qzl\") pod \"cinder-db-create-shmjs\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.672493 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a45e-account-create-update-g6bl6"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.754323 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-2wcvc"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.755888 4897 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.766325 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2wcvc"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.773374 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14911dd9-fb30-4512-bfff-1e5acd6b0b50-operator-scripts\") pod \"cinder-db-create-shmjs\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.773471 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-operator-scripts\") pod \"cinder-a45e-account-create-update-g6bl6\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.773515 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qzl\" (UniqueName: \"kubernetes.io/projected/14911dd9-fb30-4512-bfff-1e5acd6b0b50-kube-api-access-64qzl\") pod \"cinder-db-create-shmjs\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.773577 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhxv\" (UniqueName: \"kubernetes.io/projected/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-kube-api-access-plhxv\") pod \"cinder-a45e-account-create-update-g6bl6\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.774138 4897 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14911dd9-fb30-4512-bfff-1e5acd6b0b50-operator-scripts\") pod \"cinder-db-create-shmjs\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.796041 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qzl\" (UniqueName: \"kubernetes.io/projected/14911dd9-fb30-4512-bfff-1e5acd6b0b50-kube-api-access-64qzl\") pod \"cinder-db-create-shmjs\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.857803 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-f75f-account-create-update-8kqcq"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.859274 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.863822 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.873544 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f75f-account-create-update-8kqcq"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.874827 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sdkp\" (UniqueName: \"kubernetes.io/projected/915fbcb5-f3cd-4597-a771-54c7ebae16a8-kube-api-access-6sdkp\") pod \"heat-db-create-2wcvc\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.875634 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/915fbcb5-f3cd-4597-a771-54c7ebae16a8-operator-scripts\") pod \"heat-db-create-2wcvc\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.875747 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-operator-scripts\") pod \"cinder-a45e-account-create-update-g6bl6\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.875903 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhxv\" (UniqueName: \"kubernetes.io/projected/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-kube-api-access-plhxv\") pod \"cinder-a45e-account-create-update-g6bl6\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.877767 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-operator-scripts\") pod \"cinder-a45e-account-create-update-g6bl6\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.887172 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.910362 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhxv\" (UniqueName: \"kubernetes.io/projected/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-kube-api-access-plhxv\") pod \"cinder-a45e-account-create-update-g6bl6\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.961133 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2zx2p"] Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.963545 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.977865 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjrrw\" (UniqueName: \"kubernetes.io/projected/4d3ace23-df5c-40ae-a726-90ebe47317ac-kube-api-access-zjrrw\") pod \"heat-f75f-account-create-update-8kqcq\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.977905 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d3ace23-df5c-40ae-a726-90ebe47317ac-operator-scripts\") pod \"heat-f75f-account-create-update-8kqcq\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.978043 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sdkp\" (UniqueName: \"kubernetes.io/projected/915fbcb5-f3cd-4597-a771-54c7ebae16a8-kube-api-access-6sdkp\") pod \"heat-db-create-2wcvc\" (UID: 
\"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.978069 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915fbcb5-f3cd-4597-a771-54c7ebae16a8-operator-scripts\") pod \"heat-db-create-2wcvc\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.978809 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915fbcb5-f3cd-4597-a771-54c7ebae16a8-operator-scripts\") pod \"heat-db-create-2wcvc\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.979428 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:25 crc kubenswrapper[4897]: I0214 19:04:25.988608 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2zx2p"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.004132 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sdkp\" (UniqueName: \"kubernetes.io/projected/915fbcb5-f3cd-4597-a771-54c7ebae16a8-kube-api-access-6sdkp\") pod \"heat-db-create-2wcvc\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.071654 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.081908 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlt48\" (UniqueName: \"kubernetes.io/projected/6566173c-4067-420f-8df0-ad21cab585fd-kube-api-access-mlt48\") pod \"barbican-db-create-2zx2p\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.082004 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjrrw\" (UniqueName: \"kubernetes.io/projected/4d3ace23-df5c-40ae-a726-90ebe47317ac-kube-api-access-zjrrw\") pod \"heat-f75f-account-create-update-8kqcq\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.082044 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d3ace23-df5c-40ae-a726-90ebe47317ac-operator-scripts\") pod \"heat-f75f-account-create-update-8kqcq\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.082114 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6566173c-4067-420f-8df0-ad21cab585fd-operator-scripts\") pod \"barbican-db-create-2zx2p\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.082791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d3ace23-df5c-40ae-a726-90ebe47317ac-operator-scripts\") pod 
\"heat-f75f-account-create-update-8kqcq\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.097495 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mgj9f"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.098905 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.107136 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjrrw\" (UniqueName: \"kubernetes.io/projected/4d3ace23-df5c-40ae-a726-90ebe47317ac-kube-api-access-zjrrw\") pod \"heat-f75f-account-create-update-8kqcq\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.112324 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c567-account-create-update-nzrhx"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.114174 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.116314 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.148017 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c567-account-create-update-nzrhx"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.179792 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mgj9f"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.184971 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6v2f\" (UniqueName: \"kubernetes.io/projected/83ba161a-cd6c-4998-8094-a4d05d9722d2-kube-api-access-m6v2f\") pod \"neutron-db-create-mgj9f\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.185063 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlt48\" (UniqueName: \"kubernetes.io/projected/6566173c-4067-420f-8df0-ad21cab585fd-kube-api-access-mlt48\") pod \"barbican-db-create-2zx2p\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.185108 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/778f41f8-59c5-4a26-9dcc-409778b0bddd-operator-scripts\") pod \"barbican-c567-account-create-update-nzrhx\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.185154 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ba161a-cd6c-4998-8094-a4d05d9722d2-operator-scripts\") pod \"neutron-db-create-mgj9f\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.185229 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdmx\" (UniqueName: \"kubernetes.io/projected/778f41f8-59c5-4a26-9dcc-409778b0bddd-kube-api-access-5bdmx\") pod \"barbican-c567-account-create-update-nzrhx\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.185270 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6566173c-4067-420f-8df0-ad21cab585fd-operator-scripts\") pod \"barbican-db-create-2zx2p\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.186134 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.187222 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6566173c-4067-420f-8df0-ad21cab585fd-operator-scripts\") pod \"barbican-db-create-2zx2p\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.208308 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlt48\" (UniqueName: \"kubernetes.io/projected/6566173c-4067-420f-8df0-ad21cab585fd-kube-api-access-mlt48\") pod \"barbican-db-create-2zx2p\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.259401 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x7p8g"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.260897 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.267142 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.267542 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z9242" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.274827 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x7p8g"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.288435 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.288682 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.309870 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-822c-account-create-update-wk5r5"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.311552 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9vr\" (UniqueName: \"kubernetes.io/projected/03e7174e-f39e-41c4-8482-29f7d420c887-kube-api-access-nt9vr\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.311628 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6v2f\" (UniqueName: \"kubernetes.io/projected/83ba161a-cd6c-4998-8094-a4d05d9722d2-kube-api-access-m6v2f\") pod \"neutron-db-create-mgj9f\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.311797 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-combined-ca-bundle\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.311864 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-config-data\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.311892 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/778f41f8-59c5-4a26-9dcc-409778b0bddd-operator-scripts\") pod \"barbican-c567-account-create-update-nzrhx\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.312001 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ba161a-cd6c-4998-8094-a4d05d9722d2-operator-scripts\") pod \"neutron-db-create-mgj9f\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.312893 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/778f41f8-59c5-4a26-9dcc-409778b0bddd-operator-scripts\") pod \"barbican-c567-account-create-update-nzrhx\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.312192 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.313985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdmx\" (UniqueName: \"kubernetes.io/projected/778f41f8-59c5-4a26-9dcc-409778b0bddd-kube-api-access-5bdmx\") pod \"barbican-c567-account-create-update-nzrhx\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.314568 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ba161a-cd6c-4998-8094-a4d05d9722d2-operator-scripts\") pod \"neutron-db-create-mgj9f\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.315206 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.331378 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6v2f\" (UniqueName: \"kubernetes.io/projected/83ba161a-cd6c-4998-8094-a4d05d9722d2-kube-api-access-m6v2f\") pod \"neutron-db-create-mgj9f\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.340210 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdmx\" (UniqueName: \"kubernetes.io/projected/778f41f8-59c5-4a26-9dcc-409778b0bddd-kube-api-access-5bdmx\") pod \"barbican-c567-account-create-update-nzrhx\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.344337 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-822c-account-create-update-wk5r5"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.416854 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-combined-ca-bundle\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.417140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-config-data\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.417234 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265faf60-c453-4644-9b9a-bb4d6d53cb74-operator-scripts\") pod \"neutron-822c-account-create-update-wk5r5\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.417294 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9vr\" (UniqueName: \"kubernetes.io/projected/03e7174e-f39e-41c4-8482-29f7d420c887-kube-api-access-nt9vr\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.417327 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvzc\" (UniqueName: \"kubernetes.io/projected/265faf60-c453-4644-9b9a-bb4d6d53cb74-kube-api-access-pdvzc\") pod \"neutron-822c-account-create-update-wk5r5\" (UID: 
\"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.421076 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-combined-ca-bundle\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.423616 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-config-data\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.446481 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt9vr\" (UniqueName: \"kubernetes.io/projected/03e7174e-f39e-41c4-8482-29f7d420c887-kube-api-access-nt9vr\") pod \"keystone-db-sync-x7p8g\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.446859 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.473416 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-shmjs"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.474678 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.484999 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:26 crc kubenswrapper[4897]: W0214 19:04:26.488907 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14911dd9_fb30_4512_bfff_1e5acd6b0b50.slice/crio-ecb19c61d6db4a79343caf89cae4d501b2778b026c8dce0072c88e03215ed114 WatchSource:0}: Error finding container ecb19c61d6db4a79343caf89cae4d501b2778b026c8dce0072c88e03215ed114: Status 404 returned error can't find the container with id ecb19c61d6db4a79343caf89cae4d501b2778b026c8dce0072c88e03215ed114 Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.519512 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvzc\" (UniqueName: \"kubernetes.io/projected/265faf60-c453-4644-9b9a-bb4d6d53cb74-kube-api-access-pdvzc\") pod \"neutron-822c-account-create-update-wk5r5\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.519668 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265faf60-c453-4644-9b9a-bb4d6d53cb74-operator-scripts\") pod \"neutron-822c-account-create-update-wk5r5\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.520537 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265faf60-c453-4644-9b9a-bb4d6d53cb74-operator-scripts\") pod \"neutron-822c-account-create-update-wk5r5\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.538116 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pdvzc\" (UniqueName: \"kubernetes.io/projected/265faf60-c453-4644-9b9a-bb4d6d53cb74-kube-api-access-pdvzc\") pod \"neutron-822c-account-create-update-wk5r5\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.613790 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.645906 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.745481 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2wcvc"] Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.767088 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a45e-account-create-update-g6bl6"] Feb 14 19:04:26 crc kubenswrapper[4897]: W0214 19:04:26.769843 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod915fbcb5_f3cd_4597_a771_54c7ebae16a8.slice/crio-44e40fd3aef405f9844dc0b71e29a5ad819cbce2745bcd7ac50cf1317e520d94 WatchSource:0}: Error finding container 44e40fd3aef405f9844dc0b71e29a5ad819cbce2745bcd7ac50cf1317e520d94: Status 404 returned error can't find the container with id 44e40fd3aef405f9844dc0b71e29a5ad819cbce2745bcd7ac50cf1317e520d94 Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.898790 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-shmjs" event={"ID":"14911dd9-fb30-4512-bfff-1e5acd6b0b50","Type":"ContainerStarted","Data":"3b4bd92a7c76eecd3e4ea8a0eb8b9ffc335763f835cb28820e37b5104f0d8ef8"} Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.898835 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-shmjs" event={"ID":"14911dd9-fb30-4512-bfff-1e5acd6b0b50","Type":"ContainerStarted","Data":"ecb19c61d6db4a79343caf89cae4d501b2778b026c8dce0072c88e03215ed114"} Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.946598 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f75f-account-create-update-8kqcq"] Feb 14 19:04:26 crc kubenswrapper[4897]: W0214 19:04:26.957176 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d3ace23_df5c_40ae_a726_90ebe47317ac.slice/crio-993971ed4f3df28c2226912ac606bb049a34d4c6059d97be5670af8347080262 WatchSource:0}: Error finding container 993971ed4f3df28c2226912ac606bb049a34d4c6059d97be5670af8347080262: Status 404 returned error can't find the container with id 993971ed4f3df28c2226912ac606bb049a34d4c6059d97be5670af8347080262 Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.958861 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2wcvc" event={"ID":"915fbcb5-f3cd-4597-a771-54c7ebae16a8","Type":"ContainerStarted","Data":"44e40fd3aef405f9844dc0b71e29a5ad819cbce2745bcd7ac50cf1317e520d94"} Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.960459 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-shmjs" podStartSLOduration=1.960445123 podStartE2EDuration="1.960445123s" podCreationTimestamp="2026-02-14 19:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:26.959137352 +0000 UTC m=+1319.935545835" watchObservedRunningTime="2026-02-14 19:04:26.960445123 +0000 UTC m=+1319.936853606" Feb 14 19:04:26 crc kubenswrapper[4897]: I0214 19:04:26.975517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a45e-account-create-update-g6bl6" 
event={"ID":"88ea3e1a-f046-40c8-9af9-72a1fc228a7c","Type":"ContainerStarted","Data":"6a6a55a8e8df58bdfac17276f77559fbc39e72a147beabb6e6342a6105404f9a"} Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.423001 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c567-account-create-update-nzrhx"] Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.444619 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2zx2p"] Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.475558 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mgj9f"] Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.579730 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x7p8g"] Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.738274 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-822c-account-create-update-wk5r5"] Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.909427 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.991457 4897 generic.go:334] "Generic (PLEG): container finished" podID="915fbcb5-f3cd-4597-a771-54c7ebae16a8" containerID="3fc86ce73213728dfb4597851f035174890d5122dc4df74304b7dae14943da93" exitCode=0 Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.991639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2wcvc" event={"ID":"915fbcb5-f3cd-4597-a771-54c7ebae16a8","Type":"ContainerDied","Data":"3fc86ce73213728dfb4597851f035174890d5122dc4df74304b7dae14943da93"} Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.994987 4897 generic.go:334] "Generic (PLEG): container finished" podID="88ea3e1a-f046-40c8-9af9-72a1fc228a7c" containerID="abd6d69c6c9760f3ab2587009eaa4590199627c35874477a70c22f833c6f384c" exitCode=0 Feb 14 19:04:27 crc 
kubenswrapper[4897]: I0214 19:04:27.995067 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a45e-account-create-update-g6bl6" event={"ID":"88ea3e1a-f046-40c8-9af9-72a1fc228a7c","Type":"ContainerDied","Data":"abd6d69c6c9760f3ab2587009eaa4590199627c35874477a70c22f833c6f384c"} Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.998336 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c567-account-create-update-nzrhx" event={"ID":"778f41f8-59c5-4a26-9dcc-409778b0bddd","Type":"ContainerStarted","Data":"22f630311c7bec2728ed93f90741d6afc03d046c71dcdeb1a0dae600ea5f579a"} Feb 14 19:04:27 crc kubenswrapper[4897]: I0214 19:04:27.998384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c567-account-create-update-nzrhx" event={"ID":"778f41f8-59c5-4a26-9dcc-409778b0bddd","Type":"ContainerStarted","Data":"f5717e0e10b4cfd6efb1ea0e7034750d2a13c4d1531f48a53c57563e237bda00"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.006234 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-822c-account-create-update-wk5r5" event={"ID":"265faf60-c453-4644-9b9a-bb4d6d53cb74","Type":"ContainerStarted","Data":"e384e2ca5bee6cc11b04fc43fccf0695e953eaecc2e155398a31481296101257"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.007782 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2zx2p" event={"ID":"6566173c-4067-420f-8df0-ad21cab585fd","Type":"ContainerStarted","Data":"eb5ef38e4e5417c9868a36bd645cef67b11fa1102dc361c744d7d29092a7d455"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.007811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2zx2p" event={"ID":"6566173c-4067-420f-8df0-ad21cab585fd","Type":"ContainerStarted","Data":"10543d27bb902d016cbe5d2d5e8e3de24017e0687d7da9fc0d9e865d6ec49939"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.010075 4897 generic.go:334] 
"Generic (PLEG): container finished" podID="14911dd9-fb30-4512-bfff-1e5acd6b0b50" containerID="3b4bd92a7c76eecd3e4ea8a0eb8b9ffc335763f835cb28820e37b5104f0d8ef8" exitCode=0 Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.010261 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-shmjs" event={"ID":"14911dd9-fb30-4512-bfff-1e5acd6b0b50","Type":"ContainerDied","Data":"3b4bd92a7c76eecd3e4ea8a0eb8b9ffc335763f835cb28820e37b5104f0d8ef8"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.013936 4897 generic.go:334] "Generic (PLEG): container finished" podID="4d3ace23-df5c-40ae-a726-90ebe47317ac" containerID="67cac65508d4a33fe9a215b3ce18d5580367f6d0223cce05773a5e4995929415" exitCode=0 Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.014024 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f75f-account-create-update-8kqcq" event={"ID":"4d3ace23-df5c-40ae-a726-90ebe47317ac","Type":"ContainerDied","Data":"67cac65508d4a33fe9a215b3ce18d5580367f6d0223cce05773a5e4995929415"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.014065 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f75f-account-create-update-8kqcq" event={"ID":"4d3ace23-df5c-40ae-a726-90ebe47317ac","Type":"ContainerStarted","Data":"993971ed4f3df28c2226912ac606bb049a34d4c6059d97be5670af8347080262"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.017752 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7p8g" event={"ID":"03e7174e-f39e-41c4-8482-29f7d420c887","Type":"ContainerStarted","Data":"1ee8cb7bd0e5f71e238e7c8363ea54f0a5828f3badce910d94190cd6b0095e9d"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.025298 4897 generic.go:334] "Generic (PLEG): container finished" podID="83ba161a-cd6c-4998-8094-a4d05d9722d2" containerID="3e2962ffb0dcd6e9b3ecfe98106f4eb414eeecdc4cba132c04589a87453f174c" exitCode=0 Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 
19:04:28.025337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mgj9f" event={"ID":"83ba161a-cd6c-4998-8094-a4d05d9722d2","Type":"ContainerDied","Data":"3e2962ffb0dcd6e9b3ecfe98106f4eb414eeecdc4cba132c04589a87453f174c"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.025357 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mgj9f" event={"ID":"83ba161a-cd6c-4998-8094-a4d05d9722d2","Type":"ContainerStarted","Data":"21f2af91eb87a0e5b64e02db68b90148b89b8a2aa9afeef314d0f2d5e862df93"} Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.034392 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c567-account-create-update-nzrhx" podStartSLOduration=2.03437033 podStartE2EDuration="2.03437033s" podCreationTimestamp="2026-02-14 19:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:28.025446578 +0000 UTC m=+1321.001855071" watchObservedRunningTime="2026-02-14 19:04:28.03437033 +0000 UTC m=+1321.010778813" Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.513222 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.631009 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7sjdq"] Feb 14 19:04:28 crc kubenswrapper[4897]: I0214 19:04:28.631894 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7sjdq" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerName="dnsmasq-dns" containerID="cri-o://dcfad339552725f49966586667622fe50d8a17978d89e13288aee810e5dd908c" gracePeriod=10 Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.042577 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="778f41f8-59c5-4a26-9dcc-409778b0bddd" containerID="22f630311c7bec2728ed93f90741d6afc03d046c71dcdeb1a0dae600ea5f579a" exitCode=0 Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.043158 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c567-account-create-update-nzrhx" event={"ID":"778f41f8-59c5-4a26-9dcc-409778b0bddd","Type":"ContainerDied","Data":"22f630311c7bec2728ed93f90741d6afc03d046c71dcdeb1a0dae600ea5f579a"} Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.047057 4897 generic.go:334] "Generic (PLEG): container finished" podID="6566173c-4067-420f-8df0-ad21cab585fd" containerID="eb5ef38e4e5417c9868a36bd645cef67b11fa1102dc361c744d7d29092a7d455" exitCode=0 Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.047146 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2zx2p" event={"ID":"6566173c-4067-420f-8df0-ad21cab585fd","Type":"ContainerDied","Data":"eb5ef38e4e5417c9868a36bd645cef67b11fa1102dc361c744d7d29092a7d455"} Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.052221 4897 generic.go:334] "Generic (PLEG): container finished" podID="265faf60-c453-4644-9b9a-bb4d6d53cb74" containerID="604f633fa0066f23160306e68341239037e1969dd6a9950b99139041efd51728" exitCode=0 Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.052293 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-822c-account-create-update-wk5r5" event={"ID":"265faf60-c453-4644-9b9a-bb4d6d53cb74","Type":"ContainerDied","Data":"604f633fa0066f23160306e68341239037e1969dd6a9950b99139041efd51728"} Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.072074 4897 generic.go:334] "Generic (PLEG): container finished" podID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerID="dcfad339552725f49966586667622fe50d8a17978d89e13288aee810e5dd908c" exitCode=0 Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.072402 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-7sjdq" event={"ID":"44adf1d8-e13a-4851-8dc7-6939ef2aa45b","Type":"ContainerDied","Data":"dcfad339552725f49966586667622fe50d8a17978d89e13288aee810e5dd908c"} Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.254185 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7sjdq" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.303346 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-nb\") pod \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.303475 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-sb\") pod \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.303595 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-dns-svc\") pod \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.303823 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-config\") pod \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.303872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhn2\" (UniqueName: 
\"kubernetes.io/projected/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-kube-api-access-5nhn2\") pod \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\" (UID: \"44adf1d8-e13a-4851-8dc7-6939ef2aa45b\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.311279 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-kube-api-access-5nhn2" (OuterVolumeSpecName: "kube-api-access-5nhn2") pod "44adf1d8-e13a-4851-8dc7-6939ef2aa45b" (UID: "44adf1d8-e13a-4851-8dc7-6939ef2aa45b"). InnerVolumeSpecName "kube-api-access-5nhn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.377189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-config" (OuterVolumeSpecName: "config") pod "44adf1d8-e13a-4851-8dc7-6939ef2aa45b" (UID: "44adf1d8-e13a-4851-8dc7-6939ef2aa45b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.386967 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44adf1d8-e13a-4851-8dc7-6939ef2aa45b" (UID: "44adf1d8-e13a-4851-8dc7-6939ef2aa45b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.394831 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44adf1d8-e13a-4851-8dc7-6939ef2aa45b" (UID: "44adf1d8-e13a-4851-8dc7-6939ef2aa45b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.406768 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.406797 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhn2\" (UniqueName: \"kubernetes.io/projected/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-kube-api-access-5nhn2\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.406806 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.406815 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.409775 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44adf1d8-e13a-4851-8dc7-6939ef2aa45b" (UID: "44adf1d8-e13a-4851-8dc7-6939ef2aa45b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.509285 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44adf1d8-e13a-4851-8dc7-6939ef2aa45b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.521267 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.617738 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6566173c-4067-420f-8df0-ad21cab585fd-operator-scripts\") pod \"6566173c-4067-420f-8df0-ad21cab585fd\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.617922 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlt48\" (UniqueName: \"kubernetes.io/projected/6566173c-4067-420f-8df0-ad21cab585fd-kube-api-access-mlt48\") pod \"6566173c-4067-420f-8df0-ad21cab585fd\" (UID: \"6566173c-4067-420f-8df0-ad21cab585fd\") " Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.618604 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6566173c-4067-420f-8df0-ad21cab585fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6566173c-4067-420f-8df0-ad21cab585fd" (UID: "6566173c-4067-420f-8df0-ad21cab585fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.618824 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6566173c-4067-420f-8df0-ad21cab585fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.623392 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6566173c-4067-420f-8df0-ad21cab585fd-kube-api-access-mlt48" (OuterVolumeSpecName: "kube-api-access-mlt48") pod "6566173c-4067-420f-8df0-ad21cab585fd" (UID: "6566173c-4067-420f-8df0-ad21cab585fd"). InnerVolumeSpecName "kube-api-access-mlt48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:29 crc kubenswrapper[4897]: I0214 19:04:29.721297 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlt48\" (UniqueName: \"kubernetes.io/projected/6566173c-4067-420f-8df0-ad21cab585fd-kube-api-access-mlt48\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.084502 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.093185 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.099070 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a45e-account-create-update-g6bl6" event={"ID":"88ea3e1a-f046-40c8-9af9-72a1fc228a7c","Type":"ContainerDied","Data":"6a6a55a8e8df58bdfac17276f77559fbc39e72a147beabb6e6342a6105404f9a"} Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.099101 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6a55a8e8df58bdfac17276f77559fbc39e72a147beabb6e6342a6105404f9a" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.099145 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a45e-account-create-update-g6bl6" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.101905 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.118420 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2zx2p" event={"ID":"6566173c-4067-420f-8df0-ad21cab585fd","Type":"ContainerDied","Data":"10543d27bb902d016cbe5d2d5e8e3de24017e0687d7da9fc0d9e865d6ec49939"} Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.118475 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10543d27bb902d016cbe5d2d5e8e3de24017e0687d7da9fc0d9e865d6ec49939" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.118597 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2zx2p" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.132589 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7sjdq" event={"ID":"44adf1d8-e13a-4851-8dc7-6939ef2aa45b","Type":"ContainerDied","Data":"ceff3a7ea486078fc4342813e62c319302f74d447a06a93758656e2767edb77f"} Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.132667 4897 scope.go:117] "RemoveContainer" containerID="dcfad339552725f49966586667622fe50d8a17978d89e13288aee810e5dd908c" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.132993 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7sjdq" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.147547 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.155979 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-operator-scripts\") pod \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.156048 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plhxv\" (UniqueName: \"kubernetes.io/projected/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-kube-api-access-plhxv\") pod \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\" (UID: \"88ea3e1a-f046-40c8-9af9-72a1fc228a7c\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.157659 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88ea3e1a-f046-40c8-9af9-72a1fc228a7c" (UID: "88ea3e1a-f046-40c8-9af9-72a1fc228a7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.173137 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-kube-api-access-plhxv" (OuterVolumeSpecName: "kube-api-access-plhxv") pod "88ea3e1a-f046-40c8-9af9-72a1fc228a7c" (UID: "88ea3e1a-f046-40c8-9af9-72a1fc228a7c"). InnerVolumeSpecName "kube-api-access-plhxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.264808 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d3ace23-df5c-40ae-a726-90ebe47317ac-operator-scripts\") pod \"4d3ace23-df5c-40ae-a726-90ebe47317ac\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.278048 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ba161a-cd6c-4998-8094-a4d05d9722d2-operator-scripts\") pod \"83ba161a-cd6c-4998-8094-a4d05d9722d2\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.278560 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sdkp\" (UniqueName: \"kubernetes.io/projected/915fbcb5-f3cd-4597-a771-54c7ebae16a8-kube-api-access-6sdkp\") pod \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.278656 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6v2f\" (UniqueName: \"kubernetes.io/projected/83ba161a-cd6c-4998-8094-a4d05d9722d2-kube-api-access-m6v2f\") pod \"83ba161a-cd6c-4998-8094-a4d05d9722d2\" (UID: \"83ba161a-cd6c-4998-8094-a4d05d9722d2\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.278940 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjrrw\" (UniqueName: \"kubernetes.io/projected/4d3ace23-df5c-40ae-a726-90ebe47317ac-kube-api-access-zjrrw\") pod \"4d3ace23-df5c-40ae-a726-90ebe47317ac\" (UID: \"4d3ace23-df5c-40ae-a726-90ebe47317ac\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.265018 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.279232 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915fbcb5-f3cd-4597-a771-54c7ebae16a8-operator-scripts\") pod \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\" (UID: \"915fbcb5-f3cd-4597-a771-54c7ebae16a8\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.275328 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3ace23-df5c-40ae-a726-90ebe47317ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d3ace23-df5c-40ae-a726-90ebe47317ac" (UID: "4d3ace23-df5c-40ae-a726-90ebe47317ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.278508 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ba161a-cd6c-4998-8094-a4d05d9722d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83ba161a-cd6c-4998-8094-a4d05d9722d2" (UID: "83ba161a-cd6c-4998-8094-a4d05d9722d2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.280648 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.280762 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plhxv\" (UniqueName: \"kubernetes.io/projected/88ea3e1a-f046-40c8-9af9-72a1fc228a7c-kube-api-access-plhxv\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.282670 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d3ace23-df5c-40ae-a726-90ebe47317ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.282780 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ba161a-cd6c-4998-8094-a4d05d9722d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.280663 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/915fbcb5-f3cd-4597-a771-54c7ebae16a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "915fbcb5-f3cd-4597-a771-54c7ebae16a8" (UID: "915fbcb5-f3cd-4597-a771-54c7ebae16a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.282200 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ba161a-cd6c-4998-8094-a4d05d9722d2-kube-api-access-m6v2f" (OuterVolumeSpecName: "kube-api-access-m6v2f") pod "83ba161a-cd6c-4998-8094-a4d05d9722d2" (UID: "83ba161a-cd6c-4998-8094-a4d05d9722d2"). 
InnerVolumeSpecName "kube-api-access-m6v2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.285579 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915fbcb5-f3cd-4597-a771-54c7ebae16a8-kube-api-access-6sdkp" (OuterVolumeSpecName: "kube-api-access-6sdkp") pod "915fbcb5-f3cd-4597-a771-54c7ebae16a8" (UID: "915fbcb5-f3cd-4597-a771-54c7ebae16a8"). InnerVolumeSpecName "kube-api-access-6sdkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.296958 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3ace23-df5c-40ae-a726-90ebe47317ac-kube-api-access-zjrrw" (OuterVolumeSpecName: "kube-api-access-zjrrw") pod "4d3ace23-df5c-40ae-a726-90ebe47317ac" (UID: "4d3ace23-df5c-40ae-a726-90ebe47317ac"). InnerVolumeSpecName "kube-api-access-zjrrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.299136 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7sjdq"] Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.319019 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7sjdq"] Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.326063 4897 scope.go:117] "RemoveContainer" containerID="b233d2b5a7cc405f3917a1e17bfad0c495758eda8b5064577300ca62448da2b0" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.409794 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qzl\" (UniqueName: \"kubernetes.io/projected/14911dd9-fb30-4512-bfff-1e5acd6b0b50-kube-api-access-64qzl\") pod \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.410053 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14911dd9-fb30-4512-bfff-1e5acd6b0b50-operator-scripts\") pod \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\" (UID: \"14911dd9-fb30-4512-bfff-1e5acd6b0b50\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.410827 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sdkp\" (UniqueName: \"kubernetes.io/projected/915fbcb5-f3cd-4597-a771-54c7ebae16a8-kube-api-access-6sdkp\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.410851 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6v2f\" (UniqueName: \"kubernetes.io/projected/83ba161a-cd6c-4998-8094-a4d05d9722d2-kube-api-access-m6v2f\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.410863 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjrrw\" (UniqueName: \"kubernetes.io/projected/4d3ace23-df5c-40ae-a726-90ebe47317ac-kube-api-access-zjrrw\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.410876 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915fbcb5-f3cd-4597-a771-54c7ebae16a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.413552 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14911dd9-fb30-4512-bfff-1e5acd6b0b50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14911dd9-fb30-4512-bfff-1e5acd6b0b50" (UID: "14911dd9-fb30-4512-bfff-1e5acd6b0b50"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.418821 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14911dd9-fb30-4512-bfff-1e5acd6b0b50-kube-api-access-64qzl" (OuterVolumeSpecName: "kube-api-access-64qzl") pod "14911dd9-fb30-4512-bfff-1e5acd6b0b50" (UID: "14911dd9-fb30-4512-bfff-1e5acd6b0b50"). InnerVolumeSpecName "kube-api-access-64qzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.478215 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.511698 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265faf60-c453-4644-9b9a-bb4d6d53cb74-operator-scripts\") pod \"265faf60-c453-4644-9b9a-bb4d6d53cb74\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.511859 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvzc\" (UniqueName: \"kubernetes.io/projected/265faf60-c453-4644-9b9a-bb4d6d53cb74-kube-api-access-pdvzc\") pod \"265faf60-c453-4644-9b9a-bb4d6d53cb74\" (UID: \"265faf60-c453-4644-9b9a-bb4d6d53cb74\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.512079 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265faf60-c453-4644-9b9a-bb4d6d53cb74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "265faf60-c453-4644-9b9a-bb4d6d53cb74" (UID: "265faf60-c453-4644-9b9a-bb4d6d53cb74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.513407 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265faf60-c453-4644-9b9a-bb4d6d53cb74-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.513424 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14911dd9-fb30-4512-bfff-1e5acd6b0b50-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.513436 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qzl\" (UniqueName: \"kubernetes.io/projected/14911dd9-fb30-4512-bfff-1e5acd6b0b50-kube-api-access-64qzl\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.516260 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265faf60-c453-4644-9b9a-bb4d6d53cb74-kube-api-access-pdvzc" (OuterVolumeSpecName: "kube-api-access-pdvzc") pod "265faf60-c453-4644-9b9a-bb4d6d53cb74" (UID: "265faf60-c453-4644-9b9a-bb4d6d53cb74"). InnerVolumeSpecName "kube-api-access-pdvzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.605132 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.615387 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvzc\" (UniqueName: \"kubernetes.io/projected/265faf60-c453-4644-9b9a-bb4d6d53cb74-kube-api-access-pdvzc\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.716821 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/778f41f8-59c5-4a26-9dcc-409778b0bddd-operator-scripts\") pod \"778f41f8-59c5-4a26-9dcc-409778b0bddd\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.716969 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bdmx\" (UniqueName: \"kubernetes.io/projected/778f41f8-59c5-4a26-9dcc-409778b0bddd-kube-api-access-5bdmx\") pod \"778f41f8-59c5-4a26-9dcc-409778b0bddd\" (UID: \"778f41f8-59c5-4a26-9dcc-409778b0bddd\") " Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.717451 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/778f41f8-59c5-4a26-9dcc-409778b0bddd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "778f41f8-59c5-4a26-9dcc-409778b0bddd" (UID: "778f41f8-59c5-4a26-9dcc-409778b0bddd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.720096 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778f41f8-59c5-4a26-9dcc-409778b0bddd-kube-api-access-5bdmx" (OuterVolumeSpecName: "kube-api-access-5bdmx") pod "778f41f8-59c5-4a26-9dcc-409778b0bddd" (UID: "778f41f8-59c5-4a26-9dcc-409778b0bddd"). InnerVolumeSpecName "kube-api-access-5bdmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.819831 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/778f41f8-59c5-4a26-9dcc-409778b0bddd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:30 crc kubenswrapper[4897]: I0214 19:04:30.819864 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bdmx\" (UniqueName: \"kubernetes.io/projected/778f41f8-59c5-4a26-9dcc-409778b0bddd-kube-api-access-5bdmx\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.150476 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c567-account-create-update-nzrhx" event={"ID":"778f41f8-59c5-4a26-9dcc-409778b0bddd","Type":"ContainerDied","Data":"f5717e0e10b4cfd6efb1ea0e7034750d2a13c4d1531f48a53c57563e237bda00"} Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.150512 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5717e0e10b4cfd6efb1ea0e7034750d2a13c4d1531f48a53c57563e237bda00" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.150512 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c567-account-create-update-nzrhx" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.153599 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mgj9f" event={"ID":"83ba161a-cd6c-4998-8094-a4d05d9722d2","Type":"ContainerDied","Data":"21f2af91eb87a0e5b64e02db68b90148b89b8a2aa9afeef314d0f2d5e862df93"} Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.153636 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f2af91eb87a0e5b64e02db68b90148b89b8a2aa9afeef314d0f2d5e862df93" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.153681 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mgj9f" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.169249 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-822c-account-create-update-wk5r5" event={"ID":"265faf60-c453-4644-9b9a-bb4d6d53cb74","Type":"ContainerDied","Data":"e384e2ca5bee6cc11b04fc43fccf0695e953eaecc2e155398a31481296101257"} Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.169539 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e384e2ca5bee6cc11b04fc43fccf0695e953eaecc2e155398a31481296101257" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.169608 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-822c-account-create-update-wk5r5" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.172446 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-shmjs" event={"ID":"14911dd9-fb30-4512-bfff-1e5acd6b0b50","Type":"ContainerDied","Data":"ecb19c61d6db4a79343caf89cae4d501b2778b026c8dce0072c88e03215ed114"} Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.172474 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb19c61d6db4a79343caf89cae4d501b2778b026c8dce0072c88e03215ed114" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.172527 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-shmjs" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.183285 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f75f-account-create-update-8kqcq" event={"ID":"4d3ace23-df5c-40ae-a726-90ebe47317ac","Type":"ContainerDied","Data":"993971ed4f3df28c2226912ac606bb049a34d4c6059d97be5670af8347080262"} Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.183331 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993971ed4f3df28c2226912ac606bb049a34d4c6059d97be5670af8347080262" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.183503 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f75f-account-create-update-8kqcq" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.198753 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2wcvc" event={"ID":"915fbcb5-f3cd-4597-a771-54c7ebae16a8","Type":"ContainerDied","Data":"44e40fd3aef405f9844dc0b71e29a5ad819cbce2745bcd7ac50cf1317e520d94"} Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.198790 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e40fd3aef405f9844dc0b71e29a5ad819cbce2745bcd7ac50cf1317e520d94" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.199363 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2wcvc" Feb 14 19:04:31 crc kubenswrapper[4897]: I0214 19:04:31.819686 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" path="/var/lib/kubelet/pods/44adf1d8-e13a-4851-8dc7-6939ef2aa45b/volumes" Feb 14 19:04:35 crc kubenswrapper[4897]: I0214 19:04:35.059936 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:35 crc kubenswrapper[4897]: I0214 19:04:35.065348 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:35 crc kubenswrapper[4897]: I0214 19:04:35.252637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7p8g" event={"ID":"03e7174e-f39e-41c4-8482-29f7d420c887","Type":"ContainerStarted","Data":"888db2ff379959435c61dc26127d1e91334587a3e7b84df3b25c50f03facd53b"} Feb 14 19:04:35 crc kubenswrapper[4897]: I0214 19:04:35.266451 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 19:04:35 crc kubenswrapper[4897]: I0214 19:04:35.287186 4897 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/keystone-db-sync-x7p8g" podStartSLOduration=2.712772444 podStartE2EDuration="9.287164211s" podCreationTimestamp="2026-02-14 19:04:26 +0000 UTC" firstStartedPulling="2026-02-14 19:04:27.606112533 +0000 UTC m=+1320.582521016" lastFinishedPulling="2026-02-14 19:04:34.18050429 +0000 UTC m=+1327.156912783" observedRunningTime="2026-02-14 19:04:35.284945281 +0000 UTC m=+1328.261353764" watchObservedRunningTime="2026-02-14 19:04:35.287164211 +0000 UTC m=+1328.263572704" Feb 14 19:04:38 crc kubenswrapper[4897]: I0214 19:04:38.300083 4897 generic.go:334] "Generic (PLEG): container finished" podID="03e7174e-f39e-41c4-8482-29f7d420c887" containerID="888db2ff379959435c61dc26127d1e91334587a3e7b84df3b25c50f03facd53b" exitCode=0 Feb 14 19:04:38 crc kubenswrapper[4897]: I0214 19:04:38.300218 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7p8g" event={"ID":"03e7174e-f39e-41c4-8482-29f7d420c887","Type":"ContainerDied","Data":"888db2ff379959435c61dc26127d1e91334587a3e7b84df3b25c50f03facd53b"} Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.802342 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.926295 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-combined-ca-bundle\") pod \"03e7174e-f39e-41c4-8482-29f7d420c887\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.926701 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt9vr\" (UniqueName: \"kubernetes.io/projected/03e7174e-f39e-41c4-8482-29f7d420c887-kube-api-access-nt9vr\") pod \"03e7174e-f39e-41c4-8482-29f7d420c887\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.926784 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-config-data\") pod \"03e7174e-f39e-41c4-8482-29f7d420c887\" (UID: \"03e7174e-f39e-41c4-8482-29f7d420c887\") " Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.932828 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e7174e-f39e-41c4-8482-29f7d420c887-kube-api-access-nt9vr" (OuterVolumeSpecName: "kube-api-access-nt9vr") pod "03e7174e-f39e-41c4-8482-29f7d420c887" (UID: "03e7174e-f39e-41c4-8482-29f7d420c887"). InnerVolumeSpecName "kube-api-access-nt9vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.953945 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03e7174e-f39e-41c4-8482-29f7d420c887" (UID: "03e7174e-f39e-41c4-8482-29f7d420c887"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:39 crc kubenswrapper[4897]: I0214 19:04:39.989799 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-config-data" (OuterVolumeSpecName: "config-data") pod "03e7174e-f39e-41c4-8482-29f7d420c887" (UID: "03e7174e-f39e-41c4-8482-29f7d420c887"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.029525 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt9vr\" (UniqueName: \"kubernetes.io/projected/03e7174e-f39e-41c4-8482-29f7d420c887-kube-api-access-nt9vr\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.029865 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.029878 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e7174e-f39e-41c4-8482-29f7d420c887-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.327396 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7p8g" event={"ID":"03e7174e-f39e-41c4-8482-29f7d420c887","Type":"ContainerDied","Data":"1ee8cb7bd0e5f71e238e7c8363ea54f0a5828f3badce910d94190cd6b0095e9d"} Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.327427 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7p8g" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.327441 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee8cb7bd0e5f71e238e7c8363ea54f0a5828f3badce910d94190cd6b0095e9d" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596111 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-79vks"] Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596652 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerName="dnsmasq-dns" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596675 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerName="dnsmasq-dns" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596691 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ea3e1a-f046-40c8-9af9-72a1fc228a7c" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596700 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ea3e1a-f046-40c8-9af9-72a1fc228a7c" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596711 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e7174e-f39e-41c4-8482-29f7d420c887" containerName="keystone-db-sync" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596719 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e7174e-f39e-41c4-8482-29f7d420c887" containerName="keystone-db-sync" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596738 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265faf60-c453-4644-9b9a-bb4d6d53cb74" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596748 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="265faf60-c453-4644-9b9a-bb4d6d53cb74" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596762 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915fbcb5-f3cd-4597-a771-54c7ebae16a8" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596771 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="915fbcb5-f3cd-4597-a771-54c7ebae16a8" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596787 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerName="init" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596795 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerName="init" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596810 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14911dd9-fb30-4512-bfff-1e5acd6b0b50" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596819 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="14911dd9-fb30-4512-bfff-1e5acd6b0b50" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596853 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6566173c-4067-420f-8df0-ad21cab585fd" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596862 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6566173c-4067-420f-8df0-ad21cab585fd" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596872 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f41f8-59c5-4a26-9dcc-409778b0bddd" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596880 4897 
state_mem.go:107] "Deleted CPUSet assignment" podUID="778f41f8-59c5-4a26-9dcc-409778b0bddd" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596895 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ba161a-cd6c-4998-8094-a4d05d9722d2" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596902 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ba161a-cd6c-4998-8094-a4d05d9722d2" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: E0214 19:04:40.596943 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3ace23-df5c-40ae-a726-90ebe47317ac" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.596952 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3ace23-df5c-40ae-a726-90ebe47317ac" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597211 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="265faf60-c453-4644-9b9a-bb4d6d53cb74" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597228 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="44adf1d8-e13a-4851-8dc7-6939ef2aa45b" containerName="dnsmasq-dns" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597247 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="14911dd9-fb30-4512-bfff-1e5acd6b0b50" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597267 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f41f8-59c5-4a26-9dcc-409778b0bddd" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597281 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ba161a-cd6c-4998-8094-a4d05d9722d2" 
containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597294 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3ace23-df5c-40ae-a726-90ebe47317ac" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597309 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="915fbcb5-f3cd-4597-a771-54c7ebae16a8" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597319 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6566173c-4067-420f-8df0-ad21cab585fd" containerName="mariadb-database-create" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597328 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e7174e-f39e-41c4-8482-29f7d420c887" containerName="keystone-db-sync" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.597341 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ea3e1a-f046-40c8-9af9-72a1fc228a7c" containerName="mariadb-account-create-update" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.598705 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.608427 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-79vks"] Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.648421 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-z9bhn"] Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.649762 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.656571 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.657098 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z9242" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.657339 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.657582 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.657812 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.661121 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-z9bhn"] Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742749 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-svc\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742807 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-credential-keys\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742846 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-scripts\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742873 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xqp\" (UniqueName: \"kubernetes.io/projected/4c3d022e-0d67-46e1-9723-7a603cf88d0f-kube-api-access-j9xqp\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742901 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-fernet-keys\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742916 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttq2h\" (UniqueName: \"kubernetes.io/projected/75686e6d-4bdc-4b28-836a-c7261b28ae81-kube-api-access-ttq2h\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742945 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742977 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-config-data\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.742998 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.743113 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-combined-ca-bundle\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.743132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-config\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.743156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.751881 4897 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-2rjdz"] Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.753153 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.765696 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.766262 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-vv8rd" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.767235 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2rjdz"] Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848016 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-combined-ca-bundle\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848085 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222vr\" (UniqueName: \"kubernetes.io/projected/c17a810c-7598-46ab-93c3-c480c175ca61-kube-api-access-222vr\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848113 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-config\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848146 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848187 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-svc\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848212 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-credential-keys\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848254 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-scripts\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-combined-ca-bundle\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848309 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9xqp\" (UniqueName: 
\"kubernetes.io/projected/4c3d022e-0d67-46e1-9723-7a603cf88d0f-kube-api-access-j9xqp\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848340 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-fernet-keys\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848363 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttq2h\" (UniqueName: \"kubernetes.io/projected/75686e6d-4bdc-4b28-836a-c7261b28ae81-kube-api-access-ttq2h\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.848410 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.854923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-config-data\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.858301 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.858525 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-config-data\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.859984 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.860562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-config\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.862084 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.865127 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.865682 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-svc\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.893926 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-credential-keys\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.903166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-combined-ca-bundle\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.905787 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-scripts\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.906605 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-config-data\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc 
kubenswrapper[4897]: I0214 19:04:40.908329 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-fernet-keys\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.934788 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9xqp\" (UniqueName: \"kubernetes.io/projected/4c3d022e-0d67-46e1-9723-7a603cf88d0f-kube-api-access-j9xqp\") pod \"dnsmasq-dns-847c4cc679-79vks\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.936127 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttq2h\" (UniqueName: \"kubernetes.io/projected/75686e6d-4bdc-4b28-836a-c7261b28ae81-kube-api-access-ttq2h\") pod \"keystone-bootstrap-z9bhn\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.964216 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-combined-ca-bundle\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.964380 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-config-data\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.964444 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-222vr\" (UniqueName: \"kubernetes.io/projected/c17a810c-7598-46ab-93c3-c480c175ca61-kube-api-access-222vr\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.981469 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:04:40 crc kubenswrapper[4897]: I0214 19:04:40.982499 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-combined-ca-bundle\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.000916 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-577t2"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.002943 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-config-data\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.003185 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.007484 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.031392 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-79vks"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.036518 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8fgns" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.037187 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.039940 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222vr\" (UniqueName: \"kubernetes.io/projected/c17a810c-7598-46ab-93c3-c480c175ca61-kube-api-access-222vr\") pod \"heat-db-sync-2rjdz\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.046913 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.058723 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qnrpp"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.060439 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.066604 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-combined-ca-bundle\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.066687 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-etc-machine-id\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.066731 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-config-data\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.066752 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdhqw\" (UniqueName: \"kubernetes.io/projected/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-kube-api-access-qdhqw\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.066785 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-scripts\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " 
pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.066862 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-db-sync-config-data\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.071950 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.072265 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fs4lc" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.072387 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.085595 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2rjdz" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.111145 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-577t2"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.126989 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qnrpp"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.150306 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9l57t"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.151938 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.153978 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gbbpq" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.154439 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.163330 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9l57t"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168319 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-config-data\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168378 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdhqw\" (UniqueName: \"kubernetes.io/projected/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-kube-api-access-qdhqw\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168435 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-scripts\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168464 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-combined-ca-bundle\") pod \"neutron-db-sync-qnrpp\" (UID: 
\"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-config\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168611 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-db-sync-config-data\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168652 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkp5\" (UniqueName: \"kubernetes.io/projected/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-kube-api-access-khkp5\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168685 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-combined-ca-bundle\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168741 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-etc-machine-id\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 
14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.168869 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-etc-machine-id\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.176708 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-db-sync-config-data\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.176901 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-scripts\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.178291 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.179618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-config-data\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.189276 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-combined-ca-bundle\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc 
kubenswrapper[4897]: I0214 19:04:41.201632 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zhsbx"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.207521 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.212548 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdhqw\" (UniqueName: \"kubernetes.io/projected/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-kube-api-access-qdhqw\") pod \"cinder-db-sync-577t2\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " pod="openstack/cinder-db-sync-577t2" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.239941 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zhsbx"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.257519 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-j2sgf"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.259159 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j2sgf" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.262240 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wlqtt" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.262553 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.270875 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271002 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2w59\" (UniqueName: \"kubernetes.io/projected/efcb9cd7-17f6-4705-96e9-40a25d718a72-kube-api-access-b2w59\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271045 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271196 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-scripts\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " 
pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271320 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-config-data\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271384 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-combined-ca-bundle\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271526 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmrv\" (UniqueName: \"kubernetes.io/projected/73b306f6-bde9-4e5c-9466-1601184571d6-kube-api-access-kfmrv\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271551 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271594 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-config\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271625 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-config\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.271967 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcb9cd7-17f6-4705-96e9-40a25d718a72-logs\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.272071 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-combined-ca-bundle\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.272098 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkp5\" (UniqueName: \"kubernetes.io/projected/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-kube-api-access-khkp5\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.272132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc 
kubenswrapper[4897]: I0214 19:04:41.276960 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j2sgf"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.284228 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-combined-ca-bundle\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.288449 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-config\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.291259 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.294010 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.299564 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkp5\" (UniqueName: \"kubernetes.io/projected/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-kube-api-access-khkp5\") pod \"neutron-db-sync-qnrpp\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.310322 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.310662 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.311670 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377152 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-scripts\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377216 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-run-httpd\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377287 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmrv\" (UniqueName: \"kubernetes.io/projected/73b306f6-bde9-4e5c-9466-1601184571d6-kube-api-access-kfmrv\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377307 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-log-httpd\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377328 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377373 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-config\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377392 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-config-data\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377436 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46sfb\" (UniqueName: \"kubernetes.io/projected/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-kube-api-access-46sfb\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377454 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcb9cd7-17f6-4705-96e9-40a25d718a72-logs\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377480 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-combined-ca-bundle\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377518 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377553 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-combined-ca-bundle\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377619 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b2w59\" (UniqueName: \"kubernetes.io/projected/efcb9cd7-17f6-4705-96e9-40a25d718a72-kube-api-access-b2w59\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377636 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztcd\" (UniqueName: \"kubernetes.io/projected/e4cf787d-aa82-449b-917e-b5863b11b429-kube-api-access-9ztcd\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377668 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377696 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377713 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377753 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-scripts\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377780 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-config-data\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377803 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-db-sync-config-data\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.377938 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcb9cd7-17f6-4705-96e9-40a25d718a72-logs\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.378385 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.378592 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-config\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: 
\"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.378632 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.379166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.379193 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.416855 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-scripts\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.418094 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-config-data\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t" Feb 14 19:04:41 crc kubenswrapper[4897]: 
I0214 19:04:41.421233 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-combined-ca-bundle\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.448877 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-577t2"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.451776 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2w59\" (UniqueName: \"kubernetes.io/projected/efcb9cd7-17f6-4705-96e9-40a25d718a72-kube-api-access-b2w59\") pod \"placement-db-sync-9l57t\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " pod="openstack/placement-db-sync-9l57t"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.451852 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmrv\" (UniqueName: \"kubernetes.io/projected/73b306f6-bde9-4e5c-9466-1601184571d6-kube-api-access-kfmrv\") pod \"dnsmasq-dns-785d8bcb8c-zhsbx\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481242 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-combined-ca-bundle\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481320 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztcd\" (UniqueName: \"kubernetes.io/projected/e4cf787d-aa82-449b-917e-b5863b11b429-kube-api-access-9ztcd\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481352 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481369 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481412 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-db-sync-config-data\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481443 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-scripts\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481462 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-run-httpd\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481501 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-log-httpd\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481528 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-config-data\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.481557 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46sfb\" (UniqueName: \"kubernetes.io/projected/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-kube-api-access-46sfb\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.485602 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-log-httpd\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.485905 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-run-httpd\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.488512 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qnrpp"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.498056 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-db-sync-config-data\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.502805 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.503679 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-config-data\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.503913 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-scripts\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.504199 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-combined-ca-bundle\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.507466 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.521831 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztcd\" (UniqueName: \"kubernetes.io/projected/e4cf787d-aa82-449b-917e-b5863b11b429-kube-api-access-9ztcd\") pod \"barbican-db-sync-j2sgf\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.524679 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46sfb\" (UniqueName: \"kubernetes.io/projected/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-kube-api-access-46sfb\") pod \"ceilometer-0\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.528439 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9l57t"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.553544 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.555974 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.587917 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.844848 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.846716 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.846798 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.850485 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.850664 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.850769 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wcdfs"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.850899 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893570 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893681 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893746 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893767 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893812 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893865 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz7fz\" (UniqueName: \"kubernetes.io/projected/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-kube-api-access-jz7fz\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893898 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.893928 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.951464 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.968395 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.968784 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.970440 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.970876 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996025 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996101 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996149 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996210 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996259 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996279 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996324 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996363 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz7fz\" (UniqueName: \"kubernetes.io/projected/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-kube-api-access-jz7fz\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.996928 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:41 crc kubenswrapper[4897]: I0214 19:04:41.997240 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.000791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.001841 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.001875 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e62bbcc549b1e49eee9b1b5ff653b97ed37b658653a03b79e94b1d5ec308d580/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.005803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.006900 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.032547 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz7fz\" (UniqueName: \"kubernetes.io/projected/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-kube-api-access-jz7fz\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.054757 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.098334 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.104018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-config-data\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.104353 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4zw\" (UniqueName: \"kubernetes.io/projected/f76f37de-eeac-44b1-afea-f790bea1e327-kube-api-access-4p4zw\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.105345 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.105481 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.105574 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.105760 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-scripts\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.105845 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-logs\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.146451 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208494 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208585 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-scripts\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208662 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-logs\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208719 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208738 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-config-data\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.208790 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4zw\" (UniqueName: \"kubernetes.io/projected/f76f37de-eeac-44b1-afea-f790bea1e327-kube-api-access-4p4zw\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.210054 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-logs\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.210689 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.212648 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.212700 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c60ca6e58b7228eda216e886c2f088869a9fd33844e5fbdaaee4673098f90fe3/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.213169 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.214140 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-config-data\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.217934 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-scripts\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.218190 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.231550 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4zw\" (UniqueName: \"kubernetes.io/projected/f76f37de-eeac-44b1-afea-f790bea1e327-kube-api-access-4p4zw\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: W0214 19:04:42.284918 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c3d022e_0d67_46e1_9723_7a603cf88d0f.slice/crio-e18e093b6a8cbd5f22d665f2e7ecf617c0f1530df273ac6537e6d9cde7641dc6 WatchSource:0}: Error finding container e18e093b6a8cbd5f22d665f2e7ecf617c0f1530df273ac6537e6d9cde7641dc6: Status 404 returned error can't find the container with id e18e093b6a8cbd5f22d665f2e7ecf617c0f1530df273ac6537e6d9cde7641dc6
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.302887 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-z9bhn"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.331011 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-79vks"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.344966 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.390910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9bhn" event={"ID":"75686e6d-4bdc-4b28-836a-c7261b28ae81","Type":"ContainerStarted","Data":"683fb6104d0709e04bc368ce88411eb9a30a7297cc9921cad80e43eecd57d88d"}
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.396344 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-79vks" event={"ID":"4c3d022e-0d67-46e1-9723-7a603cf88d0f","Type":"ContainerStarted","Data":"e18e093b6a8cbd5f22d665f2e7ecf617c0f1530df273ac6537e6d9cde7641dc6"}
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.409818 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.425382 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2rjdz"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.432568 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.868754 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9l57t"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.914288 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zhsbx"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.969163 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qnrpp"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.985108 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j2sgf"]
Feb 14 19:04:42 crc kubenswrapper[4897]: I0214 19:04:42.995408 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.128763 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.135672 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-577t2"]
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.163727 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.323663 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.434165 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j2sgf" event={"ID":"e4cf787d-aa82-449b-917e-b5863b11b429","Type":"ContainerStarted","Data":"e30cc4ace98bbb8806a8057897081dd11c7b59e588d5fc4f82b3a004f6e6db93"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.438654 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerStarted","Data":"05587ed4ba696b5e71e10829f7c77636251ff15708457c5f4916154b99fe1d29"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.438762 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.440223 4897 generic.go:334] "Generic (PLEG): container finished" podID="4c3d022e-0d67-46e1-9723-7a603cf88d0f" containerID="2b759161f3269223a75ef5cb00279eb3e0990cd56b8aabf312876087fa3168de" exitCode=0
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.440276 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-79vks" event={"ID":"4c3d022e-0d67-46e1-9723-7a603cf88d0f","Type":"ContainerDied","Data":"2b759161f3269223a75ef5cb00279eb3e0990cd56b8aabf312876087fa3168de"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.443555 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qnrpp" event={"ID":"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec","Type":"ContainerStarted","Data":"a378e17ae0874d53855434093e5097afb540694529af263653e557e5f2441b47"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.443585 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qnrpp" event={"ID":"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec","Type":"ContainerStarted","Data":"01b30e661d750d6cedebefe648b1f5b6f1e588ea62b0b039deb63813b20f90be"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.453762 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" event={"ID":"73b306f6-bde9-4e5c-9466-1601184571d6","Type":"ContainerStarted","Data":"07c6c9f5c1baa4fb697e541d6214319ed06fc898098562b0ca9718bb15435aaf"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.458571 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9bhn" event={"ID":"75686e6d-4bdc-4b28-836a-c7261b28ae81","Type":"ContainerStarted","Data":"5f2587ec5324386036de347b4ccf04d694c83f6a3d608ca0151c373a8dd5dd21"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.466620 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rjdz" event={"ID":"c17a810c-7598-46ab-93c3-c480c175ca61","Type":"ContainerStarted","Data":"dfb3b57d7b891e300ad76c5a8744de110daa830d14ea87557ee595f70db57434"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.471850 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9l57t" event={"ID":"efcb9cd7-17f6-4705-96e9-40a25d718a72","Type":"ContainerStarted","Data":"2626c8260359568bc2e3591b7aff8eb3f5cc81ae8b6eeaa985d0c8490dd1710f"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.473408 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-577t2" event={"ID":"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1","Type":"ContainerStarted","Data":"52c594c4b0a814706f3038551703f21b2a3aec9032fa649794b4ddbfc803dbaa"}
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.521313 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qnrpp" podStartSLOduration=3.521295532 podStartE2EDuration="3.521295532s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:43.50382667 +0000 UTC m=+1336.480235153" watchObservedRunningTime="2026-02-14 19:04:43.521295532 +0000 UTC m=+1336.497704015"
Feb 14 19:04:43 crc kubenswrapper[4897]: I0214 19:04:43.532473 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-z9bhn" podStartSLOduration=3.532455735 podStartE2EDuration="3.532455735s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:43.52248942 +0000 UTC m=+1336.498897903" watchObservedRunningTime="2026-02-14 19:04:43.532455735 +0000 UTC m=+1336.508864218"
Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.178089 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-79vks"
Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.199164 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.343930 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-svc\") pod \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") "
Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.344063 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-sb\") pod \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") "
Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.344192 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-config\") pod \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") "
Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.344262 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9xqp\" (UniqueName: \"kubernetes.io/projected/4c3d022e-0d67-46e1-9723-7a603cf88d0f-kube-api-access-j9xqp\") pod \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") "
Feb 14 19:04:44 crc
kubenswrapper[4897]: I0214 19:04:44.344287 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-nb\") pod \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.344354 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-swift-storage-0\") pod \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\" (UID: \"4c3d022e-0d67-46e1-9723-7a603cf88d0f\") " Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.360322 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c3d022e-0d67-46e1-9723-7a603cf88d0f-kube-api-access-j9xqp" (OuterVolumeSpecName: "kube-api-access-j9xqp") pod "4c3d022e-0d67-46e1-9723-7a603cf88d0f" (UID: "4c3d022e-0d67-46e1-9723-7a603cf88d0f"). InnerVolumeSpecName "kube-api-access-j9xqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.374046 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-config" (OuterVolumeSpecName: "config") pod "4c3d022e-0d67-46e1-9723-7a603cf88d0f" (UID: "4c3d022e-0d67-46e1-9723-7a603cf88d0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.392075 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c3d022e-0d67-46e1-9723-7a603cf88d0f" (UID: "4c3d022e-0d67-46e1-9723-7a603cf88d0f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.392022 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c3d022e-0d67-46e1-9723-7a603cf88d0f" (UID: "4c3d022e-0d67-46e1-9723-7a603cf88d0f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.397478 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c3d022e-0d67-46e1-9723-7a603cf88d0f" (UID: "4c3d022e-0d67-46e1-9723-7a603cf88d0f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.414288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c3d022e-0d67-46e1-9723-7a603cf88d0f" (UID: "4c3d022e-0d67-46e1-9723-7a603cf88d0f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.446586 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.446620 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9xqp\" (UniqueName: \"kubernetes.io/projected/4c3d022e-0d67-46e1-9723-7a603cf88d0f-kube-api-access-j9xqp\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.446634 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.446642 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.446653 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.446661 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c3d022e-0d67-46e1-9723-7a603cf88d0f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.492908 4897 generic.go:334] "Generic (PLEG): container finished" podID="73b306f6-bde9-4e5c-9466-1601184571d6" containerID="55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487" exitCode=0 Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.492988 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" event={"ID":"73b306f6-bde9-4e5c-9466-1601184571d6","Type":"ContainerDied","Data":"55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487"} Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.493061 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" event={"ID":"73b306f6-bde9-4e5c-9466-1601184571d6","Type":"ContainerStarted","Data":"f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8"} Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.493079 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.494642 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8","Type":"ContainerStarted","Data":"272c23fcff7e4fae9865eeb6dfe465e7d6d40edc58f9eb1f7b06172ae42cd3e9"} Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.497223 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76f37de-eeac-44b1-afea-f790bea1e327","Type":"ContainerStarted","Data":"bfcf765f22056780b5e8bda1d5002a1efafb00e4670afaed3da03578ad3d1ff0"} Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.518055 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-79vks" event={"ID":"4c3d022e-0d67-46e1-9723-7a603cf88d0f","Type":"ContainerDied","Data":"e18e093b6a8cbd5f22d665f2e7ecf617c0f1530df273ac6537e6d9cde7641dc6"} Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.518108 4897 scope.go:117] "RemoveContainer" containerID="2b759161f3269223a75ef5cb00279eb3e0990cd56b8aabf312876087fa3168de" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.518127 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-79vks" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.525750 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" podStartSLOduration=3.525729582 podStartE2EDuration="3.525729582s" podCreationTimestamp="2026-02-14 19:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:44.517516432 +0000 UTC m=+1337.493924935" watchObservedRunningTime="2026-02-14 19:04:44.525729582 +0000 UTC m=+1337.502138065" Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.615059 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-79vks"] Feb 14 19:04:44 crc kubenswrapper[4897]: I0214 19:04:44.628123 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-79vks"] Feb 14 19:04:45 crc kubenswrapper[4897]: I0214 19:04:45.570922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76f37de-eeac-44b1-afea-f790bea1e327","Type":"ContainerStarted","Data":"9354b7227b857fa823732bdf0eec323739b2134c3eaa0f6f3c4f85c8a121411d"} Feb 14 19:04:45 crc kubenswrapper[4897]: I0214 19:04:45.577383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8","Type":"ContainerStarted","Data":"dc32f229372da76a38e49e4b0cde8240e44fbabaf5a31d70a7c496a9bbe8b6f6"} Feb 14 19:04:45 crc kubenswrapper[4897]: I0214 19:04:45.819330 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3d022e-0d67-46e1-9723-7a603cf88d0f" path="/var/lib/kubelet/pods/4c3d022e-0d67-46e1-9723-7a603cf88d0f/volumes" Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.599767 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"f76f37de-eeac-44b1-afea-f790bea1e327","Type":"ContainerStarted","Data":"4740568f7321364de6c0eeacd2622c7ca3e5742c452c59a24119037066f72b77"} Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.600536 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-log" containerID="cri-o://9354b7227b857fa823732bdf0eec323739b2134c3eaa0f6f3c4f85c8a121411d" gracePeriod=30 Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.600626 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-httpd" containerID="cri-o://4740568f7321364de6c0eeacd2622c7ca3e5742c452c59a24119037066f72b77" gracePeriod=30 Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.604894 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8","Type":"ContainerStarted","Data":"0112ffd128a948449293403599fd7c5ceebb645e1cb45bd65cb7a7c46f7da623"} Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.605017 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-log" containerID="cri-o://dc32f229372da76a38e49e4b0cde8240e44fbabaf5a31d70a7c496a9bbe8b6f6" gracePeriod=30 Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.605147 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-httpd" containerID="cri-o://0112ffd128a948449293403599fd7c5ceebb645e1cb45bd65cb7a7c46f7da623" gracePeriod=30 Feb 14 19:04:46 crc kubenswrapper[4897]: 
I0214 19:04:46.633069 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.633009984 podStartE2EDuration="6.633009984s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:46.626395214 +0000 UTC m=+1339.602803697" watchObservedRunningTime="2026-02-14 19:04:46.633009984 +0000 UTC m=+1339.609418457" Feb 14 19:04:46 crc kubenswrapper[4897]: I0214 19:04:46.656749 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.656718603 podStartE2EDuration="6.656718603s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:04:46.645781757 +0000 UTC m=+1339.622190240" watchObservedRunningTime="2026-02-14 19:04:46.656718603 +0000 UTC m=+1339.633127086" Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.620676 4897 generic.go:334] "Generic (PLEG): container finished" podID="f76f37de-eeac-44b1-afea-f790bea1e327" containerID="4740568f7321364de6c0eeacd2622c7ca3e5742c452c59a24119037066f72b77" exitCode=0 Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.621014 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76f37de-eeac-44b1-afea-f790bea1e327","Type":"ContainerDied","Data":"4740568f7321364de6c0eeacd2622c7ca3e5742c452c59a24119037066f72b77"} Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.621112 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76f37de-eeac-44b1-afea-f790bea1e327","Type":"ContainerDied","Data":"9354b7227b857fa823732bdf0eec323739b2134c3eaa0f6f3c4f85c8a121411d"} Feb 14 19:04:47 
crc kubenswrapper[4897]: I0214 19:04:47.621076 4897 generic.go:334] "Generic (PLEG): container finished" podID="f76f37de-eeac-44b1-afea-f790bea1e327" containerID="9354b7227b857fa823732bdf0eec323739b2134c3eaa0f6f3c4f85c8a121411d" exitCode=143 Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.623922 4897 generic.go:334] "Generic (PLEG): container finished" podID="75686e6d-4bdc-4b28-836a-c7261b28ae81" containerID="5f2587ec5324386036de347b4ccf04d694c83f6a3d608ca0151c373a8dd5dd21" exitCode=0 Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.623997 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9bhn" event={"ID":"75686e6d-4bdc-4b28-836a-c7261b28ae81","Type":"ContainerDied","Data":"5f2587ec5324386036de347b4ccf04d694c83f6a3d608ca0151c373a8dd5dd21"} Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.628271 4897 generic.go:334] "Generic (PLEG): container finished" podID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerID="0112ffd128a948449293403599fd7c5ceebb645e1cb45bd65cb7a7c46f7da623" exitCode=0 Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.628300 4897 generic.go:334] "Generic (PLEG): container finished" podID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerID="dc32f229372da76a38e49e4b0cde8240e44fbabaf5a31d70a7c496a9bbe8b6f6" exitCode=143 Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.628380 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8","Type":"ContainerDied","Data":"0112ffd128a948449293403599fd7c5ceebb645e1cb45bd65cb7a7c46f7da623"} Feb 14 19:04:47 crc kubenswrapper[4897]: I0214 19:04:47.628409 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8","Type":"ContainerDied","Data":"dc32f229372da76a38e49e4b0cde8240e44fbabaf5a31d70a7c496a9bbe8b6f6"} Feb 14 19:04:51 crc kubenswrapper[4897]: I0214 
19:04:51.556828 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:04:51 crc kubenswrapper[4897]: I0214 19:04:51.642777 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wwgp5"] Feb 14 19:04:51 crc kubenswrapper[4897]: I0214 19:04:51.643401 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" containerID="cri-o://fa7a2b5fe0d9f19351d0ee6bbedbd6bebcbf47dea78d04a4038a74c2f7a9e737" gracePeriod=10 Feb 14 19:04:52 crc kubenswrapper[4897]: I0214 19:04:52.687582 4897 generic.go:334] "Generic (PLEG): container finished" podID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerID="fa7a2b5fe0d9f19351d0ee6bbedbd6bebcbf47dea78d04a4038a74c2f7a9e737" exitCode=0 Feb 14 19:04:52 crc kubenswrapper[4897]: I0214 19:04:52.687674 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" event={"ID":"f691ef96-83d3-4da6-879d-63f6cdb753a4","Type":"ContainerDied","Data":"fa7a2b5fe0d9f19351d0ee6bbedbd6bebcbf47dea78d04a4038a74c2f7a9e737"} Feb 14 19:04:53 crc kubenswrapper[4897]: I0214 19:04:53.513363 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 14 19:04:58 crc kubenswrapper[4897]: I0214 19:04:58.513096 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.375591 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-z9bhn" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.387920 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.449976 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p4zw\" (UniqueName: \"kubernetes.io/projected/f76f37de-eeac-44b1-afea-f790bea1e327-kube-api-access-4p4zw\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450059 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-combined-ca-bundle\") pod \"75686e6d-4bdc-4b28-836a-c7261b28ae81\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450131 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-scripts\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450210 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-logs\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450263 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-httpd-run\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " 
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450289 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-scripts\") pod \"75686e6d-4bdc-4b28-836a-c7261b28ae81\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450324 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttq2h\" (UniqueName: \"kubernetes.io/projected/75686e6d-4bdc-4b28-836a-c7261b28ae81-kube-api-access-ttq2h\") pod \"75686e6d-4bdc-4b28-836a-c7261b28ae81\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450511 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450540 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-public-tls-certs\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450568 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-combined-ca-bundle\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450619 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-config-data\") pod \"75686e6d-4bdc-4b28-836a-c7261b28ae81\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450670 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-credential-keys\") pod \"75686e6d-4bdc-4b28-836a-c7261b28ae81\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450740 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-fernet-keys\") pod \"75686e6d-4bdc-4b28-836a-c7261b28ae81\" (UID: \"75686e6d-4bdc-4b28-836a-c7261b28ae81\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.450775 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-config-data\") pod \"f76f37de-eeac-44b1-afea-f790bea1e327\" (UID: \"f76f37de-eeac-44b1-afea-f790bea1e327\") " Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.451321 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.457274 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-scripts" (OuterVolumeSpecName: "scripts") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.457792 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-logs" (OuterVolumeSpecName: "logs") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.464547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75686e6d-4bdc-4b28-836a-c7261b28ae81-kube-api-access-ttq2h" (OuterVolumeSpecName: "kube-api-access-ttq2h") pod "75686e6d-4bdc-4b28-836a-c7261b28ae81" (UID: "75686e6d-4bdc-4b28-836a-c7261b28ae81"). InnerVolumeSpecName "kube-api-access-ttq2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.479242 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76f37de-eeac-44b1-afea-f790bea1e327-kube-api-access-4p4zw" (OuterVolumeSpecName: "kube-api-access-4p4zw") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "kube-api-access-4p4zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.488253 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "75686e6d-4bdc-4b28-836a-c7261b28ae81" (UID: "75686e6d-4bdc-4b28-836a-c7261b28ae81"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.488882 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "75686e6d-4bdc-4b28-836a-c7261b28ae81" (UID: "75686e6d-4bdc-4b28-836a-c7261b28ae81"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.501859 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-scripts" (OuterVolumeSpecName: "scripts") pod "75686e6d-4bdc-4b28-836a-c7261b28ae81" (UID: "75686e6d-4bdc-4b28-836a-c7261b28ae81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.529782 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200" (OuterVolumeSpecName: "glance") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "pvc-c2c4846d-e178-48b1-80da-0604a66e3200". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.536578 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-config-data" (OuterVolumeSpecName: "config-data") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.548176 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.549747 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f76f37de-eeac-44b1-afea-f790bea1e327" (UID: "f76f37de-eeac-44b1-afea-f790bea1e327"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552195 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552218 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-logs\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552226 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76f37de-eeac-44b1-afea-f790bea1e327-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552236 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552245 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttq2h\" (UniqueName: \"kubernetes.io/projected/75686e6d-4bdc-4b28-836a-c7261b28ae81-kube-api-access-ttq2h\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552271 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") on node \"crc\" "
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552280 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552290 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552298 4897 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552305 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552314 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76f37de-eeac-44b1-afea-f790bea1e327-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552322 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p4zw\" (UniqueName: \"kubernetes.io/projected/f76f37de-eeac-44b1-afea-f790bea1e327-kube-api-access-4p4zw\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.552908 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75686e6d-4bdc-4b28-836a-c7261b28ae81" (UID: "75686e6d-4bdc-4b28-836a-c7261b28ae81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.556670 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-config-data" (OuterVolumeSpecName: "config-data") pod "75686e6d-4bdc-4b28-836a-c7261b28ae81" (UID: "75686e6d-4bdc-4b28-836a-c7261b28ae81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.578582 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.578717 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c2c4846d-e178-48b1-80da-0604a66e3200" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200") on node "crc"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.653917 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.653952 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.653965 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75686e6d-4bdc-4b28-836a-c7261b28ae81-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.818818 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76f37de-eeac-44b1-afea-f790bea1e327","Type":"ContainerDied","Data":"bfcf765f22056780b5e8bda1d5002a1efafb00e4670afaed3da03578ad3d1ff0"}
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.818834 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.819401 4897 scope.go:117] "RemoveContainer" containerID="4740568f7321364de6c0eeacd2622c7ca3e5742c452c59a24119037066f72b77"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.820831 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z9bhn" event={"ID":"75686e6d-4bdc-4b28-836a-c7261b28ae81","Type":"ContainerDied","Data":"683fb6104d0709e04bc368ce88411eb9a30a7297cc9921cad80e43eecd57d88d"}
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.820869 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683fb6104d0709e04bc368ce88411eb9a30a7297cc9921cad80e43eecd57d88d"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.821492 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z9bhn"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.861215 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.870871 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.942167 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:05:00 crc kubenswrapper[4897]: E0214 19:05:00.942610 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3d022e-0d67-46e1-9723-7a603cf88d0f" containerName="init"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.942628 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3d022e-0d67-46e1-9723-7a603cf88d0f" containerName="init"
Feb 14 19:05:00 crc kubenswrapper[4897]: E0214 19:05:00.942643 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75686e6d-4bdc-4b28-836a-c7261b28ae81" containerName="keystone-bootstrap"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.942651 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="75686e6d-4bdc-4b28-836a-c7261b28ae81" containerName="keystone-bootstrap"
Feb 14 19:05:00 crc kubenswrapper[4897]: E0214 19:05:00.942693 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-log"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.942699 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-log"
Feb 14 19:05:00 crc kubenswrapper[4897]: E0214 19:05:00.942712 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-httpd"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.942718 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-httpd"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.943006 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-log"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.943044 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3d022e-0d67-46e1-9723-7a603cf88d0f" containerName="init"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.943057 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" containerName="glance-httpd"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.943064 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="75686e6d-4bdc-4b28-836a-c7261b28ae81" containerName="keystone-bootstrap"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.944244 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.947102 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.947205 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 14 19:05:00 crc kubenswrapper[4897]: I0214 19:05:00.976495 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.066343 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.066423 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.066639 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.066812 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.066942 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-logs\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.067153 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqr6\" (UniqueName: \"kubernetes.io/projected/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-kube-api-access-gjqr6\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.067222 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.067250 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169716 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169769 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169812 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169879 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169941 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.169997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-logs\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.170097 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqr6\" (UniqueName: \"kubernetes.io/projected/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-kube-api-access-gjqr6\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.170295 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.170763 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-logs\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.174688 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.174729 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c60ca6e58b7228eda216e886c2f088869a9fd33844e5fbdaaee4673098f90fe3/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.174897 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.174958 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.175745 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.176381 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.188938 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqr6\" (UniqueName: \"kubernetes.io/projected/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-kube-api-access-gjqr6\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.232768 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.274913 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.465105 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-z9bhn"]
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.473769 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-z9bhn"]
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.562716 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jsr6q"]
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.564391 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.567102 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.567156 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.567414 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.569358 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z9242"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.569886 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.591308 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jsr6q"]
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.681938 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-fernet-keys\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.682442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-credential-keys\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.682641 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7hb\" (UniqueName: \"kubernetes.io/projected/d1a362ef-bc82-43d1-93d2-81806d08bd50-kube-api-access-6q7hb\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.682686 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-combined-ca-bundle\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.683015 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-scripts\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.683199 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-config-data\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.785147 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-scripts\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.785270 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-config-data\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.785393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-fernet-keys\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.785923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-credential-keys\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.785982 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7hb\" (UniqueName: \"kubernetes.io/projected/d1a362ef-bc82-43d1-93d2-81806d08bd50-kube-api-access-6q7hb\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.786016 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-combined-ca-bundle\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.789179 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-scripts\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.789884 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-combined-ca-bundle\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.790116 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-credential-keys\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.790223 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-config-data\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.801020 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-fernet-keys\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.815665 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75686e6d-4bdc-4b28-836a-c7261b28ae81" path="/var/lib/kubelet/pods/75686e6d-4bdc-4b28-836a-c7261b28ae81/volumes"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.816452 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76f37de-eeac-44b1-afea-f790bea1e327" path="/var/lib/kubelet/pods/f76f37de-eeac-44b1-afea-f790bea1e327/volumes"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.816552 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q7hb\" (UniqueName: \"kubernetes.io/projected/d1a362ef-bc82-43d1-93d2-81806d08bd50-kube-api-access-6q7hb\") pod \"keystone-bootstrap-jsr6q\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") " pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:01 crc kubenswrapper[4897]: I0214 19:05:01.884320 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:06 crc kubenswrapper[4897]: I0214 19:05:06.889812 4897 generic.go:334] "Generic (PLEG): container finished" podID="a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" containerID="a378e17ae0874d53855434093e5097afb540694529af263653e557e5f2441b47" exitCode=0
Feb 14 19:05:06 crc kubenswrapper[4897]: I0214 19:05:06.889948 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qnrpp" event={"ID":"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec","Type":"ContainerDied","Data":"a378e17ae0874d53855434093e5097afb540694529af263653e557e5f2441b47"}
Feb 14 19:05:08 crc kubenswrapper[4897]: I0214 19:05:08.512897 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout"
Feb 14 19:05:08 crc kubenswrapper[4897]: I0214 19:05:08.513605 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5"
Feb 14 19:05:12 crc kubenswrapper[4897]: I0214 19:05:12.412499 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:12 crc kubenswrapper[4897]: I0214 19:05:12.413247 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.094984 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.294507 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-httpd-run\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.294880 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.294913 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz7fz\" (UniqueName: \"kubernetes.io/projected/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-kube-api-access-jz7fz\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.294944 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-combined-ca-bundle\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.294983 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-internal-tls-certs\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.295193 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-scripts\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.295272 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-logs\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.295310 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-config-data\") pod \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\" (UID: \"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8\") "
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.301226 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.301468 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-logs" (OuterVolumeSpecName: "logs") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.304576 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-kube-api-access-jz7fz" (OuterVolumeSpecName: "kube-api-access-jz7fz") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "kube-api-access-jz7fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.316046 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-scripts" (OuterVolumeSpecName: "scripts") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.328470 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c" (OuterVolumeSpecName: "glance") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "pvc-6b289847-29c6-4db3-8215-32600f200b4c". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.337334 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.359051 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-config-data" (OuterVolumeSpecName: "config-data") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.366176 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" (UID: "419d0ff7-1ce7-4ad4-a02e-12aa69a172b8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397582 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397671 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") on node \"crc\" " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397687 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz7fz\" (UniqueName: \"kubernetes.io/projected/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-kube-api-access-jz7fz\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397699 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397707 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397715 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397724 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.397732 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.424187 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.424475 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6b289847-29c6-4db3-8215-32600f200b4c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c") on node "crc" Feb 14 19:05:13 crc kubenswrapper[4897]: E0214 19:05:13.483745 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 14 19:05:13 crc kubenswrapper[4897]: E0214 19:05:13.483980 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h5d4h576h65fh576h668h97h7dh5d4h66bhd9h679h556h644hc8h668h5fbhf6h8fh564hc9h5bbhcch6fh86h656h656h5dbh544h65ch596h644q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-a
ccess-46sfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(744ddf55-9af8-4c94-8a91-4280fd9c8d6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.499915 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.513560 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 14 19:05:13 crc kubenswrapper[4897]: E0214 19:05:13.787484 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 14 19:05:13 crc kubenswrapper[4897]: E0214 19:05:13.787741 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-222vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-2rjdz_openstack(c17a810c-7598-46ab-93c3-c480c175ca61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:05:13 crc kubenswrapper[4897]: E0214 19:05:13.788967 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-2rjdz" podUID="c17a810c-7598-46ab-93c3-c480c175ca61" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.798345 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.804955 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.907878 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj46w\" (UniqueName: \"kubernetes.io/projected/f691ef96-83d3-4da6-879d-63f6cdb753a4-kube-api-access-jj46w\") pod \"f691ef96-83d3-4da6-879d-63f6cdb753a4\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.907971 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-config\") pod \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.907994 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-swift-storage-0\") pod \"f691ef96-83d3-4da6-879d-63f6cdb753a4\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.908102 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-svc\") pod \"f691ef96-83d3-4da6-879d-63f6cdb753a4\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.908122 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-nb\") pod \"f691ef96-83d3-4da6-879d-63f6cdb753a4\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.908192 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-config\") pod \"f691ef96-83d3-4da6-879d-63f6cdb753a4\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.908266 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkp5\" (UniqueName: \"kubernetes.io/projected/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-kube-api-access-khkp5\") pod \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.908301 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-sb\") pod \"f691ef96-83d3-4da6-879d-63f6cdb753a4\" (UID: \"f691ef96-83d3-4da6-879d-63f6cdb753a4\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.908331 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-combined-ca-bundle\") pod \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\" (UID: \"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec\") " Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.913597 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f691ef96-83d3-4da6-879d-63f6cdb753a4-kube-api-access-jj46w" (OuterVolumeSpecName: "kube-api-access-jj46w") pod "f691ef96-83d3-4da6-879d-63f6cdb753a4" (UID: "f691ef96-83d3-4da6-879d-63f6cdb753a4"). InnerVolumeSpecName "kube-api-access-jj46w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.916195 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-kube-api-access-khkp5" (OuterVolumeSpecName: "kube-api-access-khkp5") pod "a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" (UID: "a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec"). InnerVolumeSpecName "kube-api-access-khkp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.944195 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-config" (OuterVolumeSpecName: "config") pod "a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" (UID: "a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.949941 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" (UID: "a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.957886 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f691ef96-83d3-4da6-879d-63f6cdb753a4" (UID: "f691ef96-83d3-4da6-879d-63f6cdb753a4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.962902 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f691ef96-83d3-4da6-879d-63f6cdb753a4" (UID: "f691ef96-83d3-4da6-879d-63f6cdb753a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.971156 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f691ef96-83d3-4da6-879d-63f6cdb753a4" (UID: "f691ef96-83d3-4da6-879d-63f6cdb753a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.987831 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f691ef96-83d3-4da6-879d-63f6cdb753a4" (UID: "f691ef96-83d3-4da6-879d-63f6cdb753a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.987839 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-config" (OuterVolumeSpecName: "config") pod "f691ef96-83d3-4da6-879d-63f6cdb753a4" (UID: "f691ef96-83d3-4da6-879d-63f6cdb753a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.998234 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qnrpp" event={"ID":"a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec","Type":"ContainerDied","Data":"01b30e661d750d6cedebefe648b1f5b6f1e588ea62b0b039deb63813b20f90be"} Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.998272 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b30e661d750d6cedebefe648b1f5b6f1e588ea62b0b039deb63813b20f90be" Feb 14 19:05:13 crc kubenswrapper[4897]: I0214 19:05:13.998331 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qnrpp" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.003315 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"419d0ff7-1ce7-4ad4-a02e-12aa69a172b8","Type":"ContainerDied","Data":"272c23fcff7e4fae9865eeb6dfe465e7d6d40edc58f9eb1f7b06172ae42cd3e9"} Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.003335 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.007592 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.007815 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" event={"ID":"f691ef96-83d3-4da6-879d-63f6cdb753a4","Type":"ContainerDied","Data":"9c371c5afcac3a4e24e3c6f0696e0c167822690025439f5372e08c28ba3b32ec"} Feb 14 19:05:14 crc kubenswrapper[4897]: E0214 19:05:14.008815 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-2rjdz" podUID="c17a810c-7598-46ab-93c3-c480c175ca61" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010357 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010386 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkp5\" (UniqueName: \"kubernetes.io/projected/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-kube-api-access-khkp5\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010398 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010407 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010418 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj46w\" 
(UniqueName: \"kubernetes.io/projected/f691ef96-83d3-4da6-879d-63f6cdb753a4-kube-api-access-jj46w\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010432 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010444 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010456 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.010466 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f691ef96-83d3-4da6-879d-63f6cdb753a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.157727 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wwgp5"] Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.172781 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wwgp5"] Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.203181 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.221394 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.244888 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 14 19:05:14 crc kubenswrapper[4897]: E0214 19:05:14.245447 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="init" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245490 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="init" Feb 14 19:05:14 crc kubenswrapper[4897]: E0214 19:05:14.245511 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-log" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245520 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-log" Feb 14 19:05:14 crc kubenswrapper[4897]: E0214 19:05:14.245554 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" containerName="neutron-db-sync" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245564 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" containerName="neutron-db-sync" Feb 14 19:05:14 crc kubenswrapper[4897]: E0214 19:05:14.245580 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245588 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" Feb 14 19:05:14 crc kubenswrapper[4897]: E0214 19:05:14.245604 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-httpd" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245613 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-httpd" Feb 14 
19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245899 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245916 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-log" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245932 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" containerName="neutron-db-sync" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.245960 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" containerName="glance-httpd" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.247623 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.269939 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.276360 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.276599 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.419635 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.419702 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.419755 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m57r\" (UniqueName: \"kubernetes.io/projected/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-kube-api-access-9m57r\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.419782 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.419966 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.420151 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " 
pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.420199 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.420274 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.521959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522090 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522147 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" 
Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522179 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m57r\" (UniqueName: \"kubernetes.io/projected/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-kube-api-access-9m57r\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522220 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522283 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522327 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522365 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc 
kubenswrapper[4897]: I0214 19:05:14.522474 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.522799 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.527500 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.527573 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e62bbcc549b1e49eee9b1b5ff653b97ed37b658653a03b79e94b1d5ec308d580/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.528690 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.533568 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.540422 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.542741 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.558517 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m57r\" (UniqueName: \"kubernetes.io/projected/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-kube-api-access-9m57r\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.582528 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:05:14 crc kubenswrapper[4897]: I0214 19:05:14.890108 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.127718 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5mhmn"] Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.131862 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.173584 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5mhmn"] Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.222782 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f78bcb6c6-95jr5"] Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.224699 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.229296 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.229614 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.229755 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.230592 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fs4lc" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.238663 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f78bcb6c6-95jr5"] Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.239103 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.239150 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.239201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-926d7\" (UniqueName: \"kubernetes.io/projected/ec91b42f-9953-4bf3-b120-48ea5599b459-kube-api-access-926d7\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.239238 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.239312 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.239394 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-config\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341202 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-926d7\" (UniqueName: \"kubernetes.io/projected/ec91b42f-9953-4bf3-b120-48ea5599b459-kube-api-access-926d7\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341256 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-httpd-config\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341293 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-ovndb-tls-certs\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341325 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341399 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9gr\" (UniqueName: \"kubernetes.io/projected/f02df6db-894f-46ff-9bdc-53559271efcc-kube-api-access-wl9gr\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341467 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341716 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-combined-ca-bundle\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341863 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-config\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341903 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-config\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341926 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.341967 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.342573 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.342640 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.342902 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.343495 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-config\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.344319 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.367840 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-926d7\" (UniqueName: \"kubernetes.io/projected/ec91b42f-9953-4bf3-b120-48ea5599b459-kube-api-access-926d7\") pod \"dnsmasq-dns-55f844cf75-5mhmn\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.444732 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-httpd-config\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.444802 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-ovndb-tls-certs\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.444874 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9gr\" (UniqueName: \"kubernetes.io/projected/f02df6db-894f-46ff-9bdc-53559271efcc-kube-api-access-wl9gr\") pod 
\"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.445016 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-combined-ca-bundle\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.445073 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-config\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.448784 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-ovndb-tls-certs\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.449790 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-config\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.450587 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-httpd-config\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc 
kubenswrapper[4897]: I0214 19:05:15.450862 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-combined-ca-bundle\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.463154 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl9gr\" (UniqueName: \"kubernetes.io/projected/f02df6db-894f-46ff-9bdc-53559271efcc-kube-api-access-wl9gr\") pod \"neutron-5f78bcb6c6-95jr5\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.478895 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.547711 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.689805 4897 scope.go:117] "RemoveContainer" containerID="9354b7227b857fa823732bdf0eec323739b2134c3eaa0f6f3c4f85c8a121411d" Feb 14 19:05:15 crc kubenswrapper[4897]: E0214 19:05:15.691287 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 14 19:05:15 crc kubenswrapper[4897]: E0214 19:05:15.691578 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdhqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-577t2_openstack(6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:05:15 crc kubenswrapper[4897]: E0214 19:05:15.692988 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-577t2" podUID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.786669 4897 scope.go:117] "RemoveContainer" containerID="0112ffd128a948449293403599fd7c5ceebb645e1cb45bd65cb7a7c46f7da623" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.814342 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419d0ff7-1ce7-4ad4-a02e-12aa69a172b8" path="/var/lib/kubelet/pods/419d0ff7-1ce7-4ad4-a02e-12aa69a172b8/volumes" Feb 14 
19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.815065 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" path="/var/lib/kubelet/pods/f691ef96-83d3-4da6-879d-63f6cdb753a4/volumes" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.873171 4897 scope.go:117] "RemoveContainer" containerID="dc32f229372da76a38e49e4b0cde8240e44fbabaf5a31d70a7c496a9bbe8b6f6" Feb 14 19:05:15 crc kubenswrapper[4897]: I0214 19:05:15.945403 4897 scope.go:117] "RemoveContainer" containerID="fa7a2b5fe0d9f19351d0ee6bbedbd6bebcbf47dea78d04a4038a74c2f7a9e737" Feb 14 19:05:16 crc kubenswrapper[4897]: I0214 19:05:16.060107 4897 scope.go:117] "RemoveContainer" containerID="fa08aef9083e07fdef1f76deb25a4d81ca13aa7a9e308f056c0cda17fa71cf38" Feb 14 19:05:16 crc kubenswrapper[4897]: E0214 19:05:16.118652 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-577t2" podUID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" Feb 14 19:05:16 crc kubenswrapper[4897]: I0214 19:05:16.331596 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jsr6q"] Feb 14 19:05:16 crc kubenswrapper[4897]: I0214 19:05:16.645235 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:05:16 crc kubenswrapper[4897]: I0214 19:05:16.694004 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5mhmn"] Feb 14 19:05:16 crc kubenswrapper[4897]: I0214 19:05:16.799170 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f78bcb6c6-95jr5"] Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.098816 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b47b5146-8110-4b6d-972a-e3d08f5c7e3c","Type":"ContainerStarted","Data":"1a2e26ee20e4599c836563b332449dd61fb4d694d2fb8e93118b941151217085"} Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.100408 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9l57t" event={"ID":"efcb9cd7-17f6-4705-96e9-40a25d718a72","Type":"ContainerStarted","Data":"724b6b8d591ef873313016a2196eaf552614bb96abc2d4fabc2e66edcd2f2a8b"} Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.101660 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsr6q" event={"ID":"d1a362ef-bc82-43d1-93d2-81806d08bd50","Type":"ContainerStarted","Data":"3da22534b4a50b124fd6dc677c4c4db3a8b75124372f35a822a5b2175fbb0745"} Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.101717 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsr6q" event={"ID":"d1a362ef-bc82-43d1-93d2-81806d08bd50","Type":"ContainerStarted","Data":"727ab6628d249ffada38b6c00af32169d5c893193f13208a37d5cbc15e87b27b"} Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.107536 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j2sgf" event={"ID":"e4cf787d-aa82-449b-917e-b5863b11b429","Type":"ContainerStarted","Data":"98a95501c242ba75d29f36480bc1c47367e6d9a98059217e4839e3abf6e2dc23"} Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.132834 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9l57t" podStartSLOduration=6.207291012 podStartE2EDuration="37.132815604s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="2026-02-14 19:04:42.855789385 +0000 UTC m=+1335.832197868" lastFinishedPulling="2026-02-14 19:05:13.781313977 +0000 UTC m=+1366.757722460" observedRunningTime="2026-02-14 19:05:17.128694925 +0000 UTC m=+1370.105103408" watchObservedRunningTime="2026-02-14 19:05:17.132815604 +0000 UTC 
m=+1370.109224087" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.158855 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jsr6q" podStartSLOduration=16.158837511 podStartE2EDuration="16.158837511s" podCreationTimestamp="2026-02-14 19:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:17.153217135 +0000 UTC m=+1370.129625628" watchObservedRunningTime="2026-02-14 19:05:17.158837511 +0000 UTC m=+1370.135245994" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.186531 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-j2sgf" podStartSLOduration=3.437792136 podStartE2EDuration="36.18651384s" podCreationTimestamp="2026-02-14 19:04:41 +0000 UTC" firstStartedPulling="2026-02-14 19:04:42.911377073 +0000 UTC m=+1335.887785556" lastFinishedPulling="2026-02-14 19:05:15.660098777 +0000 UTC m=+1368.636507260" observedRunningTime="2026-02-14 19:05:17.184755775 +0000 UTC m=+1370.161164258" watchObservedRunningTime="2026-02-14 19:05:17.18651384 +0000 UTC m=+1370.162922313" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.328785 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dd74d4b5f-8tgjp"] Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.330521 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.335180 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.335323 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.340558 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dd74d4b5f-8tgjp"] Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432367 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-ovndb-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432476 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8kxn\" (UniqueName: \"kubernetes.io/projected/642b5930-c972-4455-a280-932d5fda60e5-kube-api-access-s8kxn\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432511 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-config\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432526 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-internal-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432583 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-public-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432609 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-httpd-config\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.432639 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-combined-ca-bundle\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534133 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-public-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534421 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-httpd-config\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534453 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-combined-ca-bundle\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534519 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-ovndb-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534587 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8kxn\" (UniqueName: \"kubernetes.io/projected/642b5930-c972-4455-a280-932d5fda60e5-kube-api-access-s8kxn\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534617 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-config\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.534634 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-internal-tls-certs\") pod 
\"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.540880 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-ovndb-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.541698 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-combined-ca-bundle\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.548397 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-internal-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.552174 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-config\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.552454 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-httpd-config\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc 
kubenswrapper[4897]: I0214 19:05:17.554959 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8kxn\" (UniqueName: \"kubernetes.io/projected/642b5930-c972-4455-a280-932d5fda60e5-kube-api-access-s8kxn\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.565979 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-public-tls-certs\") pod \"neutron-6dd74d4b5f-8tgjp\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.647791 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:05:17 crc kubenswrapper[4897]: I0214 19:05:17.669185 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:18 crc kubenswrapper[4897]: W0214 19:05:18.225676 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf02df6db_894f_46ff_9bdc_53559271efcc.slice/crio-1a2a911e6d2267c919e2501d34254837a337323019e73ff8ac2f248ee9799bbc WatchSource:0}: Error finding container 1a2a911e6d2267c919e2501d34254837a337323019e73ff8ac2f248ee9799bbc: Status 404 returned error can't find the container with id 1a2a911e6d2267c919e2501d34254837a337323019e73ff8ac2f248ee9799bbc Feb 14 19:05:18 crc kubenswrapper[4897]: I0214 19:05:18.514065 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wwgp5" podUID="f691ef96-83d3-4da6-879d-63f6cdb753a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 14 19:05:18 crc kubenswrapper[4897]: I0214 19:05:18.994532 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dd74d4b5f-8tgjp"] Feb 14 19:05:19 crc kubenswrapper[4897]: W0214 19:05:19.000542 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod642b5930_c972_4455_a280_932d5fda60e5.slice/crio-4462136dd3fa8cfd6f68c4fa8f5e00546d27ee6daf0fd1aaa3ff988e976b3fff WatchSource:0}: Error finding container 4462136dd3fa8cfd6f68c4fa8f5e00546d27ee6daf0fd1aaa3ff988e976b3fff: Status 404 returned error can't find the container with id 4462136dd3fa8cfd6f68c4fa8f5e00546d27ee6daf0fd1aaa3ff988e976b3fff Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.148509 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78bcb6c6-95jr5" event={"ID":"f02df6db-894f-46ff-9bdc-53559271efcc","Type":"ContainerStarted","Data":"17a52e8a3fe8f070db20a61f31504b23b5fcfe692a2e99fbc22c1cc12e743d63"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.148844 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78bcb6c6-95jr5" event={"ID":"f02df6db-894f-46ff-9bdc-53559271efcc","Type":"ContainerStarted","Data":"1a2a911e6d2267c919e2501d34254837a337323019e73ff8ac2f248ee9799bbc"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.152717 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dd74d4b5f-8tgjp" event={"ID":"642b5930-c972-4455-a280-932d5fda60e5","Type":"ContainerStarted","Data":"4462136dd3fa8cfd6f68c4fa8f5e00546d27ee6daf0fd1aaa3ff988e976b3fff"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.172326 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerStarted","Data":"cb5a41fcb4ff4f2b959cd287547afa1a35f80510e5c23b646c7944fb2ab82e26"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.177223 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47b5146-8110-4b6d-972a-e3d08f5c7e3c","Type":"ContainerStarted","Data":"b727a440edf144138b581af5aa46095cb32e7eaec0e1bc03747739a8061943c7"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.182682 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27b24061-39f4-4ddd-aa33-bdd4da0e90bd","Type":"ContainerStarted","Data":"3e8b09179bb7899d5fb2c19bb5fec0a461f3cfead54f71b840acfc3506b4b12e"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.184583 4897 generic.go:334] "Generic (PLEG): container finished" podID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerID="58a140e91b06d8518d7ef54870e3425c5c46e253573006d7a78bb557c73b7065" exitCode=0 Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.184656 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" 
event={"ID":"ec91b42f-9953-4bf3-b120-48ea5599b459","Type":"ContainerDied","Data":"58a140e91b06d8518d7ef54870e3425c5c46e253573006d7a78bb557c73b7065"} Feb 14 19:05:19 crc kubenswrapper[4897]: I0214 19:05:19.184683 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" event={"ID":"ec91b42f-9953-4bf3-b120-48ea5599b459","Type":"ContainerStarted","Data":"291595da19a0d75b3d70a918780432ad18ea4ebc23c5340ab49bd8f0139431ae"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.197289 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47b5146-8110-4b6d-972a-e3d08f5c7e3c","Type":"ContainerStarted","Data":"ec8027176c9ecec33fecc0f1fb9f29f8ca9c4068b270aa33aaf3c3d639304bd9"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.204932 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27b24061-39f4-4ddd-aa33-bdd4da0e90bd","Type":"ContainerStarted","Data":"ebdbb10eebc8deea4b7f629fcf730b38457933a0731c74ceac878a4d4864ca1c"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.204974 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27b24061-39f4-4ddd-aa33-bdd4da0e90bd","Type":"ContainerStarted","Data":"0148c16d6c818afd1210fd9f66d1e08ddc906dda9c37b68da948287b3ca66b8b"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.209471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" event={"ID":"ec91b42f-9953-4bf3-b120-48ea5599b459","Type":"ContainerStarted","Data":"869e466cd3ba37eb78c0ac59e106aa81833c305db7e694441fb1183186213bce"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.209862 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.214502 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-5f78bcb6c6-95jr5" event={"ID":"f02df6db-894f-46ff-9bdc-53559271efcc","Type":"ContainerStarted","Data":"006cb9f87c8e9b82c013f350d99ca6813d52dd9db09179684551e19ec51b572f"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.214713 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.231016 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dd74d4b5f-8tgjp" event={"ID":"642b5930-c972-4455-a280-932d5fda60e5","Type":"ContainerStarted","Data":"a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.231069 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dd74d4b5f-8tgjp" event={"ID":"642b5930-c972-4455-a280-932d5fda60e5","Type":"ContainerStarted","Data":"73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585"} Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.231212 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.247050 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.247018955 podStartE2EDuration="20.247018955s" podCreationTimestamp="2026-02-14 19:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:20.224000702 +0000 UTC m=+1373.200409195" watchObservedRunningTime="2026-02-14 19:05:20.247018955 +0000 UTC m=+1373.223427438" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.254363 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f78bcb6c6-95jr5" podStartSLOduration=5.254346555 
podStartE2EDuration="5.254346555s" podCreationTimestamp="2026-02-14 19:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:20.244140725 +0000 UTC m=+1373.220549218" watchObservedRunningTime="2026-02-14 19:05:20.254346555 +0000 UTC m=+1373.230755028" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.265254 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.265238016 podStartE2EDuration="6.265238016s" podCreationTimestamp="2026-02-14 19:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:20.260330262 +0000 UTC m=+1373.236738775" watchObservedRunningTime="2026-02-14 19:05:20.265238016 +0000 UTC m=+1373.241646499" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.283743 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" podStartSLOduration=5.283728177 podStartE2EDuration="5.283728177s" podCreationTimestamp="2026-02-14 19:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:20.281163347 +0000 UTC m=+1373.257571850" watchObservedRunningTime="2026-02-14 19:05:20.283728177 +0000 UTC m=+1373.260136660" Feb 14 19:05:20 crc kubenswrapper[4897]: I0214 19:05:20.313282 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dd74d4b5f-8tgjp" podStartSLOduration=3.313262823 podStartE2EDuration="3.313262823s" podCreationTimestamp="2026-02-14 19:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:20.301739432 +0000 UTC m=+1373.278147925" 
watchObservedRunningTime="2026-02-14 19:05:20.313262823 +0000 UTC m=+1373.289671306" Feb 14 19:05:21 crc kubenswrapper[4897]: I0214 19:05:21.276290 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 19:05:21 crc kubenswrapper[4897]: I0214 19:05:21.276702 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 19:05:21 crc kubenswrapper[4897]: I0214 19:05:21.322984 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 19:05:21 crc kubenswrapper[4897]: I0214 19:05:21.339712 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 19:05:22 crc kubenswrapper[4897]: I0214 19:05:22.253883 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 19:05:22 crc kubenswrapper[4897]: I0214 19:05:22.254231 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 19:05:23 crc kubenswrapper[4897]: I0214 19:05:23.265820 4897 generic.go:334] "Generic (PLEG): container finished" podID="e4cf787d-aa82-449b-917e-b5863b11b429" containerID="98a95501c242ba75d29f36480bc1c47367e6d9a98059217e4839e3abf6e2dc23" exitCode=0 Feb 14 19:05:23 crc kubenswrapper[4897]: I0214 19:05:23.265922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j2sgf" event={"ID":"e4cf787d-aa82-449b-917e-b5863b11b429","Type":"ContainerDied","Data":"98a95501c242ba75d29f36480bc1c47367e6d9a98059217e4839e3abf6e2dc23"} Feb 14 19:05:23 crc kubenswrapper[4897]: I0214 19:05:23.267826 4897 generic.go:334] "Generic (PLEG): container finished" podID="efcb9cd7-17f6-4705-96e9-40a25d718a72" containerID="724b6b8d591ef873313016a2196eaf552614bb96abc2d4fabc2e66edcd2f2a8b" exitCode=0 Feb 
14 19:05:23 crc kubenswrapper[4897]: I0214 19:05:23.268920 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9l57t" event={"ID":"efcb9cd7-17f6-4705-96e9-40a25d718a72","Type":"ContainerDied","Data":"724b6b8d591ef873313016a2196eaf552614bb96abc2d4fabc2e66edcd2f2a8b"} Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.286042 4897 generic.go:334] "Generic (PLEG): container finished" podID="d1a362ef-bc82-43d1-93d2-81806d08bd50" containerID="3da22534b4a50b124fd6dc677c4c4db3a8b75124372f35a822a5b2175fbb0745" exitCode=0 Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.286191 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsr6q" event={"ID":"d1a362ef-bc82-43d1-93d2-81806d08bd50","Type":"ContainerDied","Data":"3da22534b4a50b124fd6dc677c4c4db3a8b75124372f35a822a5b2175fbb0745"} Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.891166 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.891572 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.930933 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9l57t" Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.935707 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.938250 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j2sgf" Feb 14 19:05:24 crc kubenswrapper[4897]: I0214 19:05:24.954930 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047460 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcb9cd7-17f6-4705-96e9-40a25d718a72-logs\") pod \"efcb9cd7-17f6-4705-96e9-40a25d718a72\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047606 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-combined-ca-bundle\") pod \"e4cf787d-aa82-449b-917e-b5863b11b429\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047767 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-scripts\") pod \"efcb9cd7-17f6-4705-96e9-40a25d718a72\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047812 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-db-sync-config-data\") pod \"e4cf787d-aa82-449b-917e-b5863b11b429\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047813 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efcb9cd7-17f6-4705-96e9-40a25d718a72-logs" (OuterVolumeSpecName: "logs") pod "efcb9cd7-17f6-4705-96e9-40a25d718a72" (UID: "efcb9cd7-17f6-4705-96e9-40a25d718a72"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047906 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-config-data\") pod \"efcb9cd7-17f6-4705-96e9-40a25d718a72\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047958 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztcd\" (UniqueName: \"kubernetes.io/projected/e4cf787d-aa82-449b-917e-b5863b11b429-kube-api-access-9ztcd\") pod \"e4cf787d-aa82-449b-917e-b5863b11b429\" (UID: \"e4cf787d-aa82-449b-917e-b5863b11b429\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.047996 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-combined-ca-bundle\") pod \"efcb9cd7-17f6-4705-96e9-40a25d718a72\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.048018 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2w59\" (UniqueName: \"kubernetes.io/projected/efcb9cd7-17f6-4705-96e9-40a25d718a72-kube-api-access-b2w59\") pod \"efcb9cd7-17f6-4705-96e9-40a25d718a72\" (UID: \"efcb9cd7-17f6-4705-96e9-40a25d718a72\") " Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.048459 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcb9cd7-17f6-4705-96e9-40a25d718a72-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.053334 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-scripts" (OuterVolumeSpecName: "scripts") pod 
"efcb9cd7-17f6-4705-96e9-40a25d718a72" (UID: "efcb9cd7-17f6-4705-96e9-40a25d718a72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.053857 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e4cf787d-aa82-449b-917e-b5863b11b429" (UID: "e4cf787d-aa82-449b-917e-b5863b11b429"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.055416 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cf787d-aa82-449b-917e-b5863b11b429-kube-api-access-9ztcd" (OuterVolumeSpecName: "kube-api-access-9ztcd") pod "e4cf787d-aa82-449b-917e-b5863b11b429" (UID: "e4cf787d-aa82-449b-917e-b5863b11b429"). InnerVolumeSpecName "kube-api-access-9ztcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.055738 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efcb9cd7-17f6-4705-96e9-40a25d718a72-kube-api-access-b2w59" (OuterVolumeSpecName: "kube-api-access-b2w59") pod "efcb9cd7-17f6-4705-96e9-40a25d718a72" (UID: "efcb9cd7-17f6-4705-96e9-40a25d718a72"). InnerVolumeSpecName "kube-api-access-b2w59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.077556 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-config-data" (OuterVolumeSpecName: "config-data") pod "efcb9cd7-17f6-4705-96e9-40a25d718a72" (UID: "efcb9cd7-17f6-4705-96e9-40a25d718a72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.078336 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4cf787d-aa82-449b-917e-b5863b11b429" (UID: "e4cf787d-aa82-449b-917e-b5863b11b429"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.078693 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efcb9cd7-17f6-4705-96e9-40a25d718a72" (UID: "efcb9cd7-17f6-4705-96e9-40a25d718a72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151102 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151155 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2w59\" (UniqueName: \"kubernetes.io/projected/efcb9cd7-17f6-4705-96e9-40a25d718a72-kube-api-access-b2w59\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151175 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151191 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151208 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4cf787d-aa82-449b-917e-b5863b11b429-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151224 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcb9cd7-17f6-4705-96e9-40a25d718a72-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.151242 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztcd\" (UniqueName: \"kubernetes.io/projected/e4cf787d-aa82-449b-917e-b5863b11b429-kube-api-access-9ztcd\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.303109 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerStarted","Data":"871bcec7d63d4fb3e82c583fe331bd75af3adaa20272cf68242b8de460e29f4d"}
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.314892 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j2sgf" event={"ID":"e4cf787d-aa82-449b-917e-b5863b11b429","Type":"ContainerDied","Data":"e30cc4ace98bbb8806a8057897081dd11c7b59e588d5fc4f82b3a004f6e6db93"}
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.314938 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30cc4ace98bbb8806a8057897081dd11c7b59e588d5fc4f82b3a004f6e6db93"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.315011 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j2sgf"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.318139 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9l57t"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.319227 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9l57t" event={"ID":"efcb9cd7-17f6-4705-96e9-40a25d718a72","Type":"ContainerDied","Data":"2626c8260359568bc2e3591b7aff8eb3f5cc81ae8b6eeaa985d0c8490dd1710f"}
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.319269 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2626c8260359568bc2e3591b7aff8eb3f5cc81ae8b6eeaa985d0c8490dd1710f"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.320777 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.320809 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.481227 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.550341 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6fc586c7b4-8x7qx"]
Feb 14 19:05:25 crc kubenswrapper[4897]: E0214 19:05:25.550797 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cf787d-aa82-449b-917e-b5863b11b429" containerName="barbican-db-sync"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.550816 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cf787d-aa82-449b-917e-b5863b11b429" containerName="barbican-db-sync"
Feb 14 19:05:25 crc kubenswrapper[4897]: E0214 19:05:25.550844 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcb9cd7-17f6-4705-96e9-40a25d718a72" containerName="placement-db-sync"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.550855 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcb9cd7-17f6-4705-96e9-40a25d718a72" containerName="placement-db-sync"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.551084 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="efcb9cd7-17f6-4705-96e9-40a25d718a72" containerName="placement-db-sync"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.551111 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cf787d-aa82-449b-917e-b5863b11b429" containerName="barbican-db-sync"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.552535 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.558111 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.558207 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.558130 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.558172 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gbbpq"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.566568 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.587009 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc586c7b4-8x7qx"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.627212 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zhsbx"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.627461 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" podUID="73b306f6-bde9-4e5c-9466-1601184571d6" containerName="dnsmasq-dns" containerID="cri-o://f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8" gracePeriod=10
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.637488 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-786dc678dd-l4rb5"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.640352 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.653845 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wlqtt"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.654004 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.654123 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.670969 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-scripts\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.671065 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1639a907-9497-4dea-a153-945921c79337-logs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.671097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-internal-tls-certs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.671238 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-combined-ca-bundle\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.671344 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ffh\" (UniqueName: \"kubernetes.io/projected/1639a907-9497-4dea-a153-945921c79337-kube-api-access-97ffh\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.672185 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-public-tls-certs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.672264 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-config-data\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.680458 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8bddbd865-mxphm"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.683007 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.692900 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-786dc678dd-l4rb5"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.695301 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.700565 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8bddbd865-mxphm"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773701 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-config-data\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773766 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-scripts\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773794 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2831142-237b-4232-8433-1a71cecdc1aa-logs\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773835 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1639a907-9497-4dea-a153-945921c79337-logs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773858 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-internal-tls-certs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773904 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7zp\" (UniqueName: \"kubernetes.io/projected/b2831142-237b-4232-8433-1a71cecdc1aa-kube-api-access-fl7zp\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773932 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-combined-ca-bundle\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773955 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.773992 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97ffh\" (UniqueName: \"kubernetes.io/projected/1639a907-9497-4dea-a153-945921c79337-kube-api-access-97ffh\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.774053 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data-custom\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.774112 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-public-tls-certs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.774132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-combined-ca-bundle\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.775325 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1639a907-9497-4dea-a153-945921c79337-logs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.784826 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-config-data\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.784904 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-scripts\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.787737 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-internal-tls-certs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.787830 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-combined-ca-bundle\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.800685 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97ffh\" (UniqueName: \"kubernetes.io/projected/1639a907-9497-4dea-a153-945921c79337-kube-api-access-97ffh\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.810780 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-public-tls-certs\") pod \"placement-6fc586c7b4-8x7qx\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.848300 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-c4968"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.851052 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.864491 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-c4968"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881128 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881178 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7zp\" (UniqueName: \"kubernetes.io/projected/b2831142-237b-4232-8433-1a71cecdc1aa-kube-api-access-fl7zp\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881204 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881262 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6708e0a-c394-435d-b408-84716a21508f-logs\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881283 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data-custom\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881339 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmz9x\" (UniqueName: \"kubernetes.io/projected/d6708e0a-c394-435d-b408-84716a21508f-kube-api-access-xmz9x\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881362 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-combined-ca-bundle\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881389 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data-custom\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881425 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-combined-ca-bundle\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881452 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2831142-237b-4232-8433-1a71cecdc1aa-logs\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.881819 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2831142-237b-4232-8433-1a71cecdc1aa-logs\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.889832 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-combined-ca-bundle\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.893226 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.906320 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b85695646-lxbpp"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.906491 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.908215 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.909572 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7zp\" (UniqueName: \"kubernetes.io/projected/b2831142-237b-4232-8433-1a71cecdc1aa-kube-api-access-fl7zp\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.911747 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.915965 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data-custom\") pod \"barbican-keystone-listener-786dc678dd-l4rb5\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") " pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.932690 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b85695646-lxbpp"]
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.978437 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsr6q"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.992742 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.992810 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6708e0a-c394-435d-b408-84716a21508f-logs\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.992885 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-config\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.992926 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phflc\" (UniqueName: \"kubernetes.io/projected/5911804f-29c7-44a8-8688-0bc0fe0a46ac-kube-api-access-phflc\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.992981 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmz9x\" (UniqueName: \"kubernetes.io/projected/d6708e0a-c394-435d-b408-84716a21508f-kube-api-access-xmz9x\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993120 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-svc\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993158 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data-custom\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993207 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-combined-ca-bundle\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993225 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993310 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993394 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.993470 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6708e0a-c394-435d-b408-84716a21508f-logs\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:25 crc kubenswrapper[4897]: I0214 19:05:25.998184 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.000651 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data-custom\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.002461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-combined-ca-bundle\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.017423 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmz9x\" (UniqueName: \"kubernetes.io/projected/d6708e0a-c394-435d-b408-84716a21508f-kube-api-access-xmz9x\") pod \"barbican-worker-8bddbd865-mxphm\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") " pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.028462 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.097676 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-scripts\") pod \"d1a362ef-bc82-43d1-93d2-81806d08bd50\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") "
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.097721 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-fernet-keys\") pod \"d1a362ef-bc82-43d1-93d2-81806d08bd50\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") "
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.097761 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-credential-keys\") pod \"d1a362ef-bc82-43d1-93d2-81806d08bd50\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") "
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.097835 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-config-data\") pod \"d1a362ef-bc82-43d1-93d2-81806d08bd50\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") "
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.097890 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-combined-ca-bundle\") pod \"d1a362ef-bc82-43d1-93d2-81806d08bd50\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") "
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.097960 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7hb\" (UniqueName: \"kubernetes.io/projected/d1a362ef-bc82-43d1-93d2-81806d08bd50-kube-api-access-6q7hb\") pod \"d1a362ef-bc82-43d1-93d2-81806d08bd50\" (UID: \"d1a362ef-bc82-43d1-93d2-81806d08bd50\") "
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098382 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data-custom\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098448 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-combined-ca-bundle\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098466 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-logs\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098499 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-config\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098525 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7w5l\" (UniqueName: \"kubernetes.io/projected/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-kube-api-access-g7w5l\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098542 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phflc\" (UniqueName: \"kubernetes.io/projected/5911804f-29c7-44a8-8688-0bc0fe0a46ac-kube-api-access-phflc\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098579 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-svc\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098627 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098665 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.098688 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.102425 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-scripts" (OuterVolumeSpecName: "scripts") pod "d1a362ef-bc82-43d1-93d2-81806d08bd50" (UID: "d1a362ef-bc82-43d1-93d2-81806d08bd50"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.103147 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d1a362ef-bc82-43d1-93d2-81806d08bd50" (UID: "d1a362ef-bc82-43d1-93d2-81806d08bd50"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.104258 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-svc\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.104858 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.105661 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.106212 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " 
pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.106763 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-config\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.107814 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a362ef-bc82-43d1-93d2-81806d08bd50-kube-api-access-6q7hb" (OuterVolumeSpecName: "kube-api-access-6q7hb") pod "d1a362ef-bc82-43d1-93d2-81806d08bd50" (UID: "d1a362ef-bc82-43d1-93d2-81806d08bd50"). InnerVolumeSpecName "kube-api-access-6q7hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.114244 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d1a362ef-bc82-43d1-93d2-81806d08bd50" (UID: "d1a362ef-bc82-43d1-93d2-81806d08bd50"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.129379 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phflc\" (UniqueName: \"kubernetes.io/projected/5911804f-29c7-44a8-8688-0bc0fe0a46ac-kube-api-access-phflc\") pod \"dnsmasq-dns-85ff748b95-c4968\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.132407 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-config-data" (OuterVolumeSpecName: "config-data") pod "d1a362ef-bc82-43d1-93d2-81806d08bd50" (UID: "d1a362ef-bc82-43d1-93d2-81806d08bd50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.148463 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1a362ef-bc82-43d1-93d2-81806d08bd50" (UID: "d1a362ef-bc82-43d1-93d2-81806d08bd50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200280 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200399 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data-custom\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200453 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-combined-ca-bundle\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200476 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-logs\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200527 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7w5l\" (UniqueName: \"kubernetes.io/projected/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-kube-api-access-g7w5l\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " 
pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200612 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200624 4897 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200634 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200643 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200651 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q7hb\" (UniqueName: \"kubernetes.io/projected/d1a362ef-bc82-43d1-93d2-81806d08bd50-kube-api-access-6q7hb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.200660 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a362ef-bc82-43d1-93d2-81806d08bd50-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.205834 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " 
pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.206858 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-logs\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.209270 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data-custom\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.214273 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-combined-ca-bundle\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.224096 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7w5l\" (UniqueName: \"kubernetes.io/projected/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-kube-api-access-g7w5l\") pod \"barbican-api-5b85695646-lxbpp\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") " pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.261679 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8bddbd865-mxphm" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.263601 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.268687 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.294633 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b85695646-lxbpp" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.345116 4897 generic.go:334] "Generic (PLEG): container finished" podID="73b306f6-bde9-4e5c-9466-1601184571d6" containerID="f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8" exitCode=0 Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.345447 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" event={"ID":"73b306f6-bde9-4e5c-9466-1601184571d6","Type":"ContainerDied","Data":"f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8"} Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.345477 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" event={"ID":"73b306f6-bde9-4e5c-9466-1601184571d6","Type":"ContainerDied","Data":"07c6c9f5c1baa4fb697e541d6214319ed06fc898098562b0ca9718bb15435aaf"} Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.345495 4897 scope.go:117] "RemoveContainer" containerID="f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.345616 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zhsbx" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.369700 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jsr6q" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.369834 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsr6q" event={"ID":"d1a362ef-bc82-43d1-93d2-81806d08bd50","Type":"ContainerDied","Data":"727ab6628d249ffada38b6c00af32169d5c893193f13208a37d5cbc15e87b27b"} Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.369882 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="727ab6628d249ffada38b6c00af32169d5c893193f13208a37d5cbc15e87b27b" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.406242 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmrv\" (UniqueName: \"kubernetes.io/projected/73b306f6-bde9-4e5c-9466-1601184571d6-kube-api-access-kfmrv\") pod \"73b306f6-bde9-4e5c-9466-1601184571d6\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.406402 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-svc\") pod \"73b306f6-bde9-4e5c-9466-1601184571d6\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.406454 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-nb\") pod \"73b306f6-bde9-4e5c-9466-1601184571d6\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.406513 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-swift-storage-0\") pod \"73b306f6-bde9-4e5c-9466-1601184571d6\" (UID: 
\"73b306f6-bde9-4e5c-9466-1601184571d6\") " Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.406540 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-sb\") pod \"73b306f6-bde9-4e5c-9466-1601184571d6\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.406570 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-config\") pod \"73b306f6-bde9-4e5c-9466-1601184571d6\" (UID: \"73b306f6-bde9-4e5c-9466-1601184571d6\") " Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.416547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b306f6-bde9-4e5c-9466-1601184571d6-kube-api-access-kfmrv" (OuterVolumeSpecName: "kube-api-access-kfmrv") pod "73b306f6-bde9-4e5c-9466-1601184571d6" (UID: "73b306f6-bde9-4e5c-9466-1601184571d6"). InnerVolumeSpecName "kube-api-access-kfmrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.511813 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfmrv\" (UniqueName: \"kubernetes.io/projected/73b306f6-bde9-4e5c-9466-1601184571d6-kube-api-access-kfmrv\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.542564 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-798cbbdc78-n5tht"] Feb 14 19:05:26 crc kubenswrapper[4897]: E0214 19:05:26.543180 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b306f6-bde9-4e5c-9466-1601184571d6" containerName="dnsmasq-dns" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.543197 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b306f6-bde9-4e5c-9466-1601184571d6" containerName="dnsmasq-dns" Feb 14 19:05:26 crc kubenswrapper[4897]: E0214 19:05:26.543213 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a362ef-bc82-43d1-93d2-81806d08bd50" containerName="keystone-bootstrap" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.543220 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a362ef-bc82-43d1-93d2-81806d08bd50" containerName="keystone-bootstrap" Feb 14 19:05:26 crc kubenswrapper[4897]: E0214 19:05:26.543246 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b306f6-bde9-4e5c-9466-1601184571d6" containerName="init" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.543252 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b306f6-bde9-4e5c-9466-1601184571d6" containerName="init" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.543425 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a362ef-bc82-43d1-93d2-81806d08bd50" containerName="keystone-bootstrap" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.543435 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="73b306f6-bde9-4e5c-9466-1601184571d6" containerName="dnsmasq-dns" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.544512 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.546852 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.547173 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.547387 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z9242" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.547407 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.547522 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.547607 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.572950 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-798cbbdc78-n5tht"] Feb 14 19:05:26 crc kubenswrapper[4897]: W0214 19:05:26.607210 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1639a907_9497_4dea_a153_945921c79337.slice/crio-5ef470851ae3c487d97ac4629e821b476854b1422c5d0f21257bae3bf1fa1dac WatchSource:0}: Error finding container 5ef470851ae3c487d97ac4629e821b476854b1422c5d0f21257bae3bf1fa1dac: Status 404 returned error can't find the container with id 5ef470851ae3c487d97ac4629e821b476854b1422c5d0f21257bae3bf1fa1dac Feb 14 19:05:26 crc kubenswrapper[4897]: 
I0214 19:05:26.614988 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-scripts\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.615097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-credential-keys\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.615132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-config-data\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.615201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-internal-tls-certs\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.615240 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjs8\" (UniqueName: \"kubernetes.io/projected/78184439-943a-4776-834b-f797a20bb2c1-kube-api-access-kjjs8\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc 
kubenswrapper[4897]: I0214 19:05:26.615267 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-public-tls-certs\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.615332 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-combined-ca-bundle\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.615382 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-fernet-keys\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.623886 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73b306f6-bde9-4e5c-9466-1601184571d6" (UID: "73b306f6-bde9-4e5c-9466-1601184571d6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.640705 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc586c7b4-8x7qx"] Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.667225 4897 scope.go:117] "RemoveContainer" containerID="55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.667406 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73b306f6-bde9-4e5c-9466-1601184571d6" (UID: "73b306f6-bde9-4e5c-9466-1601184571d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.667501 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-config" (OuterVolumeSpecName: "config") pod "73b306f6-bde9-4e5c-9466-1601184571d6" (UID: "73b306f6-bde9-4e5c-9466-1601184571d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.671648 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73b306f6-bde9-4e5c-9466-1601184571d6" (UID: "73b306f6-bde9-4e5c-9466-1601184571d6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.719016 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-combined-ca-bundle\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721407 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-fernet-keys\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721526 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-scripts\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721633 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-credential-keys\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721663 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-config-data\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 
19:05:26.721812 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-internal-tls-certs\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721853 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjjs8\" (UniqueName: \"kubernetes.io/projected/78184439-943a-4776-834b-f797a20bb2c1-kube-api-access-kjjs8\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721873 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-public-tls-certs\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721968 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721980 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721989 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.721998 
4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.728803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-combined-ca-bundle\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.755552 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-fernet-keys\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.757195 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-config-data\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.759635 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-public-tls-certs\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.777140 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-internal-tls-certs\") pod \"keystone-798cbbdc78-n5tht\" (UID: 
\"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.781282 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjjs8\" (UniqueName: \"kubernetes.io/projected/78184439-943a-4776-834b-f797a20bb2c1-kube-api-access-kjjs8\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.781856 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-scripts\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.791666 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73b306f6-bde9-4e5c-9466-1601184571d6" (UID: "73b306f6-bde9-4e5c-9466-1601184571d6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.798463 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78184439-943a-4776-834b-f797a20bb2c1-credential-keys\") pod \"keystone-798cbbdc78-n5tht\" (UID: \"78184439-943a-4776-834b-f797a20bb2c1\") " pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.823575 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-786dc678dd-l4rb5"] Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.858132 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b306f6-bde9-4e5c-9466-1601184571d6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.881960 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.971314 4897 scope.go:117] "RemoveContainer" containerID="f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8" Feb 14 19:05:26 crc kubenswrapper[4897]: E0214 19:05:26.980217 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8\": container with ID starting with f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8 not found: ID does not exist" containerID="f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.980261 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8"} err="failed to get container status 
\"f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8\": rpc error: code = NotFound desc = could not find container \"f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8\": container with ID starting with f308b96003cdc1e738b4a178bc436e8a82cc4d36d3f6f8af113b055df77ef3e8 not found: ID does not exist" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.980284 4897 scope.go:117] "RemoveContainer" containerID="55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.981188 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6576bd4d47-rhqmj"] Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.982937 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:26 crc kubenswrapper[4897]: E0214 19:05:26.983544 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487\": container with ID starting with 55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487 not found: ID does not exist" containerID="55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487" Feb 14 19:05:26 crc kubenswrapper[4897]: I0214 19:05:26.983586 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487"} err="failed to get container status \"55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487\": rpc error: code = NotFound desc = could not find container \"55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487\": container with ID starting with 55f4a73444829a6bbe40afd8cf3f296f9d44202b74048e9ddbd87c050687f487 not found: ID does not exist" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.019487 4897 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c4d46dc74-lkxxb"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.038400 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.056121 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6576bd4d47-rhqmj"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.074606 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-combined-ca-bundle\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.074755 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdjq\" (UniqueName: \"kubernetes.io/projected/8a7e158b-1796-4311-89ce-c05a5f1acd87-kube-api-access-gtdjq\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.074834 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-config-data\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.074922 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-config-data-custom\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.074954 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7e158b-1796-4311-89ce-c05a5f1acd87-logs\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.079885 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c4d46dc74-lkxxb"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.102039 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-669b4bcf6b-jk28p"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.104376 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.140289 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-669b4bcf6b-jk28p"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176316 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-config-data-custom\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176371 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7e158b-1796-4311-89ce-c05a5f1acd87-logs\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176413 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xft6\" (UniqueName: \"kubernetes.io/projected/cecda2fd-aafa-4261-9947-e07a96c39aa5-kube-api-access-6xft6\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176443 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data-custom\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176467 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8pn5\" (UniqueName: \"kubernetes.io/projected/6cfccb60-304d-4c37-b2ac-ed560f3830fe-kube-api-access-x8pn5\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176509 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cfccb60-304d-4c37-b2ac-ed560f3830fe-logs\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176529 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-combined-ca-bundle\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176556 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-config-data-custom\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176604 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtdjq\" (UniqueName: \"kubernetes.io/projected/8a7e158b-1796-4311-89ce-c05a5f1acd87-kube-api-access-gtdjq\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc 
kubenswrapper[4897]: I0214 19:05:27.176640 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecda2fd-aafa-4261-9947-e07a96c39aa5-logs\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176661 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-config-data\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176686 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-config-data\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.176711 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-combined-ca-bundle\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.190432 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: 
\"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.190507 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-combined-ca-bundle\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.191510 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zhsbx"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.192375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7e158b-1796-4311-89ce-c05a5f1acd87-logs\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.226008 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-config-data\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.231103 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-config-data-custom\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.240168 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8a7e158b-1796-4311-89ce-c05a5f1acd87-combined-ca-bundle\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.243075 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtdjq\" (UniqueName: \"kubernetes.io/projected/8a7e158b-1796-4311-89ce-c05a5f1acd87-kube-api-access-gtdjq\") pod \"barbican-worker-6576bd4d47-rhqmj\" (UID: \"8a7e158b-1796-4311-89ce-c05a5f1acd87\") " pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.246193 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zhsbx"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.312705 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-config-data-custom\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.312828 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecda2fd-aafa-4261-9947-e07a96c39aa5-logs\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.312852 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-config-data\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " 
pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.312898 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-combined-ca-bundle\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.312926 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.312946 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-combined-ca-bundle\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.313002 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xft6\" (UniqueName: \"kubernetes.io/projected/cecda2fd-aafa-4261-9947-e07a96c39aa5-kube-api-access-6xft6\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.313041 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data-custom\") pod \"barbican-api-669b4bcf6b-jk28p\" 
(UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.313063 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8pn5\" (UniqueName: \"kubernetes.io/projected/6cfccb60-304d-4c37-b2ac-ed560f3830fe-kube-api-access-x8pn5\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.313101 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cfccb60-304d-4c37-b2ac-ed560f3830fe-logs\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.313468 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cfccb60-304d-4c37-b2ac-ed560f3830fe-logs\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.325418 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecda2fd-aafa-4261-9947-e07a96c39aa5-logs\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.336402 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6576bd4d47-rhqmj" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.337770 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-config-data-custom\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.339745 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data-custom\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.357090 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.365076 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xft6\" (UniqueName: \"kubernetes.io/projected/cecda2fd-aafa-4261-9947-e07a96c39aa5-kube-api-access-6xft6\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.373788 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-config-data\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: 
\"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.377553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-combined-ca-bundle\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.382053 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecda2fd-aafa-4261-9947-e07a96c39aa5-combined-ca-bundle\") pod \"barbican-keystone-listener-7c4d46dc74-lkxxb\" (UID: \"cecda2fd-aafa-4261-9947-e07a96c39aa5\") " pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.382555 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.383346 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8pn5\" (UniqueName: \"kubernetes.io/projected/6cfccb60-304d-4c37-b2ac-ed560f3830fe-kube-api-access-x8pn5\") pod \"barbican-api-669b4bcf6b-jk28p\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.445544 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.474443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" event={"ID":"b2831142-237b-4232-8433-1a71cecdc1aa","Type":"ContainerStarted","Data":"4fe9d51f8b225a71d7a25149bff878076501cd799bdfd04df4dc4ce50e4c4d7c"} Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.497780 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-c4968"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.510014 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.510048 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.510156 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc586c7b4-8x7qx" event={"ID":"1639a907-9497-4dea-a153-945921c79337","Type":"ContainerStarted","Data":"5ef470851ae3c487d97ac4629e821b476854b1422c5d0f21257bae3bf1fa1dac"} Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.544137 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8bddbd865-mxphm"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.742600 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b85695646-lxbpp"] Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.865563 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b306f6-bde9-4e5c-9466-1601184571d6" path="/var/lib/kubelet/pods/73b306f6-bde9-4e5c-9466-1601184571d6/volumes" Feb 14 19:05:27 crc kubenswrapper[4897]: I0214 19:05:27.977098 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-798cbbdc78-n5tht"] Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.542574 4897 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-worker-8bddbd865-mxphm" event={"ID":"d6708e0a-c394-435d-b408-84716a21508f","Type":"ContainerStarted","Data":"6b844b3d6d2ceb4319a55dd0d435f8bcbc4ca9892469061eae1fc3d9974d4b7b"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.556637 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-c4968" event={"ID":"5911804f-29c7-44a8-8688-0bc0fe0a46ac","Type":"ContainerStarted","Data":"798ee1583a2be97a6e93df3e717089c60efa4b18d25da73d8ec19ad8e4a6b419"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.556690 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-c4968" event={"ID":"5911804f-29c7-44a8-8688-0bc0fe0a46ac","Type":"ContainerStarted","Data":"37327f376d9119a7befb95a3c120918ab578c79b1ee6128a34109555fe04a698"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.576731 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-798cbbdc78-n5tht" event={"ID":"78184439-943a-4776-834b-f797a20bb2c1","Type":"ContainerStarted","Data":"220ba82b3d0e53bef1b59f7eacd8d9d2d8843fe16ece10109c7fbf53432b5792"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.591180 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b85695646-lxbpp" event={"ID":"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1","Type":"ContainerStarted","Data":"03e407e0edc47795aa1bebe2fc8071ab7288b5c33a72b68799cf0e5a2ab562ca"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.591241 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b85695646-lxbpp" event={"ID":"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1","Type":"ContainerStarted","Data":"4eb54eef99c779ed6fa7cdce20971499664f0535741704f17e82ea0fa2276693"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.602260 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc586c7b4-8x7qx" event={"ID":"1639a907-9497-4dea-a153-945921c79337","Type":"ContainerStarted","Data":"82dd2f2688766725650bee1eb8d63c5a544e36fef4aa12e9f5c39f2fa22c5032"}
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.604039 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.604083 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fc586c7b4-8x7qx"
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.825592 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6fc586c7b4-8x7qx" podStartSLOduration=3.82557369 podStartE2EDuration="3.82557369s" podCreationTimestamp="2026-02-14 19:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:28.646273423 +0000 UTC m=+1381.622681926" watchObservedRunningTime="2026-02-14 19:05:28.82557369 +0000 UTC m=+1381.801982173"
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.895155 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6576bd4d47-rhqmj"]
Feb 14 19:05:28 crc kubenswrapper[4897]: W0214 19:05:28.909402 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcecda2fd_aafa_4261_9947_e07a96c39aa5.slice/crio-730815b4e4a1806155a1b9ed93592041eec8785a2cea4fdcb17bbf20d6b5958b WatchSource:0}: Error finding container 730815b4e4a1806155a1b9ed93592041eec8785a2cea4fdcb17bbf20d6b5958b: Status 404 returned error can't find the container with id 730815b4e4a1806155a1b9ed93592041eec8785a2cea4fdcb17bbf20d6b5958b
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.924492 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c4d46dc74-lkxxb"]
Feb 14 19:05:28 crc kubenswrapper[4897]: I0214 19:05:28.938331 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-669b4bcf6b-jk28p"]
Feb 14 19:05:28 crc kubenswrapper[4897]: W0214 19:05:28.940976 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cfccb60_304d_4c37_b2ac_ed560f3830fe.slice/crio-5e50b2751d0d92aca6c5b02f8b69e0cca1689520cbb8ba878c6dfcd5aa486e29 WatchSource:0}: Error finding container 5e50b2751d0d92aca6c5b02f8b69e0cca1689520cbb8ba878c6dfcd5aa486e29: Status 404 returned error can't find the container with id 5e50b2751d0d92aca6c5b02f8b69e0cca1689520cbb8ba878c6dfcd5aa486e29
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.481700 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b85695646-lxbpp"]
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.512005 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75dc4484db-pr977"]
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.521872 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.527766 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.530419 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.549007 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75dc4484db-pr977"]
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.637110 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669b4bcf6b-jk28p" event={"ID":"6cfccb60-304d-4c37-b2ac-ed560f3830fe","Type":"ContainerStarted","Data":"3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.637151 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669b4bcf6b-jk28p" event={"ID":"6cfccb60-304d-4c37-b2ac-ed560f3830fe","Type":"ContainerStarted","Data":"5e50b2751d0d92aca6c5b02f8b69e0cca1689520cbb8ba878c6dfcd5aa486e29"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.643150 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" event={"ID":"cecda2fd-aafa-4261-9947-e07a96c39aa5","Type":"ContainerStarted","Data":"730815b4e4a1806155a1b9ed93592041eec8785a2cea4fdcb17bbf20d6b5958b"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.660820 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc586c7b4-8x7qx" event={"ID":"1639a907-9497-4dea-a153-945921c79337","Type":"ContainerStarted","Data":"50173ab05d1c9f56f9b06b808ec984f5e26ec10f3a8eaf8d7c6e65e628bf172a"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.674957 4897 generic.go:334] "Generic (PLEG): container finished" podID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerID="798ee1583a2be97a6e93df3e717089c60efa4b18d25da73d8ec19ad8e4a6b419" exitCode=0
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.675060 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-c4968" event={"ID":"5911804f-29c7-44a8-8688-0bc0fe0a46ac","Type":"ContainerDied","Data":"798ee1583a2be97a6e93df3e717089c60efa4b18d25da73d8ec19ad8e4a6b419"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.691778 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-798cbbdc78-n5tht" event={"ID":"78184439-943a-4776-834b-f797a20bb2c1","Type":"ContainerStarted","Data":"5ebeb130f21463d57020bb6acd3eb45cc5f48a56c62e2c5eade5663a698e839a"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.692669 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-798cbbdc78-n5tht"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.700866 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b85695646-lxbpp" event={"ID":"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1","Type":"ContainerStarted","Data":"917171c806a0183508ac29aeb0b3654e1042330d30293f1dd4c399e2505ce2b1"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.701682 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.701713 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.702751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkcj\" (UniqueName: \"kubernetes.io/projected/79135975-c59e-4ea0-8487-7d47e4d5d632-kube-api-access-nkkcj\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.702849 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-config-data-custom\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.702931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-config-data\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.702953 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79135975-c59e-4ea0-8487-7d47e4d5d632-logs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.702976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-internal-tls-certs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.702993 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-public-tls-certs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.703045 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-combined-ca-bundle\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.721471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6576bd4d47-rhqmj" event={"ID":"8a7e158b-1796-4311-89ce-c05a5f1acd87","Type":"ContainerStarted","Data":"5993e8a5b20613774498cf24965b9d5ee4f2c22e11486bca8f853fbbf6e8ac69"}
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.751450 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b85695646-lxbpp" podStartSLOduration=4.751429765 podStartE2EDuration="4.751429765s" podCreationTimestamp="2026-02-14 19:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:29.733592695 +0000 UTC m=+1382.710001188" watchObservedRunningTime="2026-02-14 19:05:29.751429765 +0000 UTC m=+1382.727838248"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.814985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79135975-c59e-4ea0-8487-7d47e4d5d632-logs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.815056 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-internal-tls-certs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.815089 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-public-tls-certs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.815164 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-combined-ca-bundle\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.815220 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkkcj\" (UniqueName: \"kubernetes.io/projected/79135975-c59e-4ea0-8487-7d47e4d5d632-kube-api-access-nkkcj\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.815359 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-config-data-custom\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.816663 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79135975-c59e-4ea0-8487-7d47e4d5d632-logs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.832112 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-config-data\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.849041 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-combined-ca-bundle\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.850855 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-config-data-custom\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.851475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-config-data\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.870569 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-internal-tls-certs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.871424 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79135975-c59e-4ea0-8487-7d47e4d5d632-public-tls-certs\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:29 crc kubenswrapper[4897]: I0214 19:05:29.912301 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkkcj\" (UniqueName: \"kubernetes.io/projected/79135975-c59e-4ea0-8487-7d47e4d5d632-kube-api-access-nkkcj\") pod \"barbican-api-75dc4484db-pr977\" (UID: \"79135975-c59e-4ea0-8487-7d47e4d5d632\") " pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:30 crc kubenswrapper[4897]: I0214 19:05:30.175431 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:30 crc kubenswrapper[4897]: I0214 19:05:30.733264 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b85695646-lxbpp" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api-log" containerID="cri-o://03e407e0edc47795aa1bebe2fc8071ab7288b5c33a72b68799cf0e5a2ab562ca" gracePeriod=30
Feb 14 19:05:30 crc kubenswrapper[4897]: I0214 19:05:30.733789 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b85695646-lxbpp" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api" containerID="cri-o://917171c806a0183508ac29aeb0b3654e1042330d30293f1dd4c399e2505ce2b1" gracePeriod=30
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.157740 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-798cbbdc78-n5tht" podStartSLOduration=5.157719318 podStartE2EDuration="5.157719318s" podCreationTimestamp="2026-02-14 19:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:29.765436605 +0000 UTC m=+1382.741845088" watchObservedRunningTime="2026-02-14 19:05:31.157719318 +0000 UTC m=+1384.134127801"
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.165120 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75dc4484db-pr977"]
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.725601 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.725965 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.746213 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-c4968" event={"ID":"5911804f-29c7-44a8-8688-0bc0fe0a46ac","Type":"ContainerStarted","Data":"7fb45f764d0e47dbee23705b6d10a036878347ac28a4af43fa589645ad4eea2a"}
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.746414 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-c4968"
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.748883 4897 generic.go:334] "Generic (PLEG): container finished" podID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerID="917171c806a0183508ac29aeb0b3654e1042330d30293f1dd4c399e2505ce2b1" exitCode=0
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.748907 4897 generic.go:334] "Generic (PLEG): container finished" podID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerID="03e407e0edc47795aa1bebe2fc8071ab7288b5c33a72b68799cf0e5a2ab562ca" exitCode=143
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.749174 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b85695646-lxbpp" event={"ID":"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1","Type":"ContainerDied","Data":"917171c806a0183508ac29aeb0b3654e1042330d30293f1dd4c399e2505ce2b1"}
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.749203 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b85695646-lxbpp" event={"ID":"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1","Type":"ContainerDied","Data":"03e407e0edc47795aa1bebe2fc8071ab7288b5c33a72b68799cf0e5a2ab562ca"}
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.767636 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-c4968" podStartSLOduration=6.7676198979999995 podStartE2EDuration="6.767619898s" podCreationTimestamp="2026-02-14 19:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:31.760620318 +0000 UTC m=+1384.737028801" watchObservedRunningTime="2026-02-14 19:05:31.767619898 +0000 UTC m=+1384.744028381"
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.839342 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.839441 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 19:05:31 crc kubenswrapper[4897]: I0214 19:05:31.976043 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.046288 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.196844 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-logs\") pod \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") "
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.197674 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-logs" (OuterVolumeSpecName: "logs") pod "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" (UID: "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.197836 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data\") pod \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") "
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.197908 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7w5l\" (UniqueName: \"kubernetes.io/projected/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-kube-api-access-g7w5l\") pod \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") "
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.198473 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data-custom\") pod \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") "
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.198570 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-combined-ca-bundle\") pod \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\" (UID: \"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1\") "
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.199238 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-logs\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.211363 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-kube-api-access-g7w5l" (OuterVolumeSpecName: "kube-api-access-g7w5l") pod "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" (UID: "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1"). InnerVolumeSpecName "kube-api-access-g7w5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.232391 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" (UID: "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.303482 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7w5l\" (UniqueName: \"kubernetes.io/projected/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-kube-api-access-g7w5l\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.303630 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.408104 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" (UID: "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.409189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data" (OuterVolumeSpecName: "config-data") pod "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" (UID: "e2f12e13-e810-495c-8e1c-b4a33d3c8ec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.507065 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.507292 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.781150 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zxbnk"]
Feb 14 19:05:32 crc kubenswrapper[4897]: E0214 19:05:32.781648 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api-log"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.781659 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api-log"
Feb 14 19:05:32 crc kubenswrapper[4897]: E0214 19:05:32.781693 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.781700 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.781906 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.781922 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" containerName="barbican-api-log"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.783598 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.784772 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b85695646-lxbpp" event={"ID":"e2f12e13-e810-495c-8e1c-b4a33d3c8ec1","Type":"ContainerDied","Data":"4eb54eef99c779ed6fa7cdce20971499664f0535741704f17e82ea0fa2276693"}
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.784814 4897 scope.go:117] "RemoveContainer" containerID="917171c806a0183508ac29aeb0b3654e1042330d30293f1dd4c399e2505ce2b1"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.784947 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b85695646-lxbpp"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.794475 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75dc4484db-pr977" event={"ID":"79135975-c59e-4ea0-8487-7d47e4d5d632","Type":"ContainerStarted","Data":"a5b196361fe1b2be826c6e598eec9dbec9c8e86be382ff757a9403c851480a88"}
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.798687 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxbnk"]
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.807166 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rjdz" event={"ID":"c17a810c-7598-46ab-93c3-c480c175ca61","Type":"ContainerStarted","Data":"eee94c9fd239e653cc0a0ffca6a13a2f2f49f3bf9e7f99594c093072baed3b5f"}
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.899257 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-2rjdz" podStartSLOduration=6.063915172 podStartE2EDuration="52.89922395s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="2026-02-14 19:04:42.452507158 +0000 UTC m=+1335.428915641" lastFinishedPulling="2026-02-14 19:05:29.287815936 +0000 UTC m=+1382.264224419" observedRunningTime="2026-02-14 19:05:32.861665531 +0000 UTC m=+1385.838074014" watchObservedRunningTime="2026-02-14 19:05:32.89922395 +0000 UTC m=+1385.875632433"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.919763 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntrk\" (UniqueName: \"kubernetes.io/projected/9113bf61-0b89-4343-b4e4-93cc2f704cf9-kube-api-access-5ntrk\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.919893 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-utilities\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:32 crc kubenswrapper[4897]: I0214 19:05:32.919959 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-catalog-content\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.021831 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ntrk\" (UniqueName: \"kubernetes.io/projected/9113bf61-0b89-4343-b4e4-93cc2f704cf9-kube-api-access-5ntrk\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.022214 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-utilities\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.022244 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-catalog-content\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.022818 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-catalog-content\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.023285 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-utilities\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.044648 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ntrk\" (UniqueName: \"kubernetes.io/projected/9113bf61-0b89-4343-b4e4-93cc2f704cf9-kube-api-access-5ntrk\") pod \"redhat-operators-zxbnk\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") " pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.297088 4897 scope.go:117] "RemoveContainer" containerID="03e407e0edc47795aa1bebe2fc8071ab7288b5c33a72b68799cf0e5a2ab562ca"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.415169 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.439088 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b85695646-lxbpp"]
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.455113 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b85695646-lxbpp"]
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.832301 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f12e13-e810-495c-8e1c-b4a33d3c8ec1" path="/var/lib/kubelet/pods/e2f12e13-e810-495c-8e1c-b4a33d3c8ec1/volumes"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.879199 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" event={"ID":"b2831142-237b-4232-8433-1a71cecdc1aa","Type":"ContainerStarted","Data":"42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.879251 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" event={"ID":"b2831142-237b-4232-8433-1a71cecdc1aa","Type":"ContainerStarted","Data":"d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.891988 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75dc4484db-pr977" event={"ID":"79135975-c59e-4ea0-8487-7d47e4d5d632","Type":"ContainerStarted","Data":"c651c07da3c87e32377d0e4eb4088286eedf621c0408b4f9cd93c9fb41da9a06"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.892059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75dc4484db-pr977" event={"ID":"79135975-c59e-4ea0-8487-7d47e4d5d632","Type":"ContainerStarted","Data":"ce6d6cdb214b445681122f4f005aa519c3da3c7760d37818cb373d454b6cba68"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.893204 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.893236 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75dc4484db-pr977"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.899066 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" podStartSLOduration=3.808410669 podStartE2EDuration="8.899023235s" podCreationTimestamp="2026-02-14 19:05:25 +0000 UTC" firstStartedPulling="2026-02-14 19:05:26.834587417 +0000 UTC m=+1379.810995890" lastFinishedPulling="2026-02-14 19:05:31.925199973 +0000 UTC m=+1384.901608456" observedRunningTime="2026-02-14 19:05:33.89663325 +0000 UTC m=+1386.873041733" watchObservedRunningTime="2026-02-14 19:05:33.899023235 +0000 UTC m=+1386.875431718"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.900689 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669b4bcf6b-jk28p" event={"ID":"6cfccb60-304d-4c37-b2ac-ed560f3830fe","Type":"ContainerStarted","Data":"84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.901605 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-669b4bcf6b-jk28p"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.901629 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-669b4bcf6b-jk28p"
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.927454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" event={"ID":"cecda2fd-aafa-4261-9947-e07a96c39aa5","Type":"ContainerStarted","Data":"ddae24b7128865a0b4d2bc22ac8ab2e893ea97ddffd8a35cd5778726037e7711"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.957264 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8bddbd865-mxphm" event={"ID":"d6708e0a-c394-435d-b408-84716a21508f","Type":"ContainerStarted","Data":"a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.957311 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8bddbd865-mxphm" event={"ID":"d6708e0a-c394-435d-b408-84716a21508f","Type":"ContainerStarted","Data":"3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831"}
Feb 14 19:05:33 crc kubenswrapper[4897]: I0214 19:05:33.980941 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6576bd4d47-rhqmj" event={"ID":"8a7e158b-1796-4311-89ce-c05a5f1acd87","Type":"ContainerStarted","Data":"6c6ce171ad3383814b85bcd7d7c1d147270ecb028e4d06f884eb8c600ce03563"}
Feb 14 19:05:34 crc kubenswrapper[4897]: I0214 19:05:34.001531 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75dc4484db-pr977" podStartSLOduration=5.001512502 podStartE2EDuration="5.001512502s" podCreationTimestamp="2026-02-14 19:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:33.942426688 +0000 UTC m=+1386.918835171" watchObservedRunningTime="2026-02-14 19:05:34.001512502 +0000 UTC m=+1386.977920985"
Feb 14 19:05:34 crc kubenswrapper[4897]: I0214 19:05:34.001972 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-669b4bcf6b-jk28p" podStartSLOduration=8.001966637 podStartE2EDuration="8.001966637s" podCreationTimestamp="2026-02-14 19:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:33.964970875 +0000 UTC m=+1386.941379368"
watchObservedRunningTime="2026-02-14 19:05:34.001966637 +0000 UTC m=+1386.978375120" Feb 14 19:05:34 crc kubenswrapper[4897]: I0214 19:05:34.032226 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8bddbd865-mxphm" podStartSLOduration=4.532888466 podStartE2EDuration="9.032203565s" podCreationTimestamp="2026-02-14 19:05:25 +0000 UTC" firstStartedPulling="2026-02-14 19:05:27.617538958 +0000 UTC m=+1380.593947441" lastFinishedPulling="2026-02-14 19:05:32.116854057 +0000 UTC m=+1385.093262540" observedRunningTime="2026-02-14 19:05:33.996457534 +0000 UTC m=+1386.972866017" watchObservedRunningTime="2026-02-14 19:05:34.032203565 +0000 UTC m=+1387.008612048" Feb 14 19:05:34 crc kubenswrapper[4897]: I0214 19:05:34.258801 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxbnk"] Feb 14 19:05:34 crc kubenswrapper[4897]: I0214 19:05:34.902775 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 19:05:34 crc kubenswrapper[4897]: I0214 19:05:34.912587 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:34.999594 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-577t2" event={"ID":"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1","Type":"ContainerStarted","Data":"c5e4b357e3e2a2032666ed8ac6a46c18162b9637d256ffc347ec143c21db4e3c"} Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.013019 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" event={"ID":"cecda2fd-aafa-4261-9947-e07a96c39aa5","Type":"ContainerStarted","Data":"8bf76ecf0cfa903756c3d6f30e35ee4ffdb7c04d40966de545203d629f135056"} Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.030490 4897 generic.go:334] "Generic (PLEG): container 
finished" podID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerID="d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c" exitCode=0 Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.030558 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerDied","Data":"d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c"} Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.030587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerStarted","Data":"a3e0b31d5e928939ac44d82ef8ef55fc3d0773b70e77738809bce604b7b315f1"} Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.034320 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-577t2" podStartSLOduration=5.62494796 podStartE2EDuration="55.034297363s" podCreationTimestamp="2026-02-14 19:04:40 +0000 UTC" firstStartedPulling="2026-02-14 19:04:43.128245178 +0000 UTC m=+1336.104653661" lastFinishedPulling="2026-02-14 19:05:32.537594591 +0000 UTC m=+1385.514003064" observedRunningTime="2026-02-14 19:05:35.02973456 +0000 UTC m=+1388.006143043" watchObservedRunningTime="2026-02-14 19:05:35.034297363 +0000 UTC m=+1388.010705846" Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.048519 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6576bd4d47-rhqmj" event={"ID":"8a7e158b-1796-4311-89ce-c05a5f1acd87","Type":"ContainerStarted","Data":"ee5a2bd2b12de273fc51cce95946c1e78c558c820b975fa4374abe1465f0de3f"} Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.094215 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c4d46dc74-lkxxb" podStartSLOduration=6.088933421 podStartE2EDuration="9.094198923s" 
podCreationTimestamp="2026-02-14 19:05:26 +0000 UTC" firstStartedPulling="2026-02-14 19:05:28.920197269 +0000 UTC m=+1381.896605752" lastFinishedPulling="2026-02-14 19:05:31.925462771 +0000 UTC m=+1384.901871254" observedRunningTime="2026-02-14 19:05:35.055605321 +0000 UTC m=+1388.032013804" watchObservedRunningTime="2026-02-14 19:05:35.094198923 +0000 UTC m=+1388.070607406" Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.149600 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-786dc678dd-l4rb5"] Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.160476 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6576bd4d47-rhqmj" podStartSLOduration=5.935536107 podStartE2EDuration="9.160456202s" podCreationTimestamp="2026-02-14 19:05:26 +0000 UTC" firstStartedPulling="2026-02-14 19:05:28.911128414 +0000 UTC m=+1381.887536897" lastFinishedPulling="2026-02-14 19:05:32.136048509 +0000 UTC m=+1385.112456992" observedRunningTime="2026-02-14 19:05:35.131163853 +0000 UTC m=+1388.107572336" watchObservedRunningTime="2026-02-14 19:05:35.160456202 +0000 UTC m=+1388.136864685" Feb 14 19:05:35 crc kubenswrapper[4897]: I0214 19:05:35.176662 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8bddbd865-mxphm"] Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.056004 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener-log" containerID="cri-o://d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052" gracePeriod=30 Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.057430 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8bddbd865-mxphm" podUID="d6708e0a-c394-435d-b408-84716a21508f" 
containerName="barbican-worker-log" containerID="cri-o://3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831" gracePeriod=30 Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.060488 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener" containerID="cri-o://42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4" gracePeriod=30 Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.060557 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8bddbd865-mxphm" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker" containerID="cri-o://a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040" gracePeriod=30 Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.271317 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.332469 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5mhmn"] Feb 14 19:05:36 crc kubenswrapper[4897]: I0214 19:05:36.332724 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="dnsmasq-dns" containerID="cri-o://869e466cd3ba37eb78c0ac59e106aa81833c305db7e694441fb1183186213bce" gracePeriod=10 Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.077145 4897 generic.go:334] "Generic (PLEG): container finished" podID="d6708e0a-c394-435d-b408-84716a21508f" containerID="3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831" exitCode=143 Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.077236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-8bddbd865-mxphm" event={"ID":"d6708e0a-c394-435d-b408-84716a21508f","Type":"ContainerDied","Data":"3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831"} Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.080852 4897 generic.go:334] "Generic (PLEG): container finished" podID="b2831142-237b-4232-8433-1a71cecdc1aa" containerID="d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052" exitCode=143 Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.080953 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" event={"ID":"b2831142-237b-4232-8433-1a71cecdc1aa","Type":"ContainerDied","Data":"d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052"} Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.099596 4897 generic.go:334] "Generic (PLEG): container finished" podID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerID="869e466cd3ba37eb78c0ac59e106aa81833c305db7e694441fb1183186213bce" exitCode=0 Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.099655 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" event={"ID":"ec91b42f-9953-4bf3-b120-48ea5599b459","Type":"ContainerDied","Data":"869e466cd3ba37eb78c0ac59e106aa81833c305db7e694441fb1183186213bce"} Feb 14 19:05:37 crc kubenswrapper[4897]: I0214 19:05:37.546689 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:39 crc kubenswrapper[4897]: I0214 19:05:39.193821 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:40 crc kubenswrapper[4897]: I0214 19:05:40.479831 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.192:5353: connect: connection refused" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.581285 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.739171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-sb\") pod \"ec91b42f-9953-4bf3-b120-48ea5599b459\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.739226 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-926d7\" (UniqueName: \"kubernetes.io/projected/ec91b42f-9953-4bf3-b120-48ea5599b459-kube-api-access-926d7\") pod \"ec91b42f-9953-4bf3-b120-48ea5599b459\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.739338 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-config\") pod \"ec91b42f-9953-4bf3-b120-48ea5599b459\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.739369 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-svc\") pod \"ec91b42f-9953-4bf3-b120-48ea5599b459\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.739553 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-swift-storage-0\") pod \"ec91b42f-9953-4bf3-b120-48ea5599b459\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") 
" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.739582 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-nb\") pod \"ec91b42f-9953-4bf3-b120-48ea5599b459\" (UID: \"ec91b42f-9953-4bf3-b120-48ea5599b459\") " Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.754292 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec91b42f-9953-4bf3-b120-48ea5599b459-kube-api-access-926d7" (OuterVolumeSpecName: "kube-api-access-926d7") pod "ec91b42f-9953-4bf3-b120-48ea5599b459" (UID: "ec91b42f-9953-4bf3-b120-48ea5599b459"). InnerVolumeSpecName "kube-api-access-926d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.794590 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec91b42f-9953-4bf3-b120-48ea5599b459" (UID: "ec91b42f-9953-4bf3-b120-48ea5599b459"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.808960 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec91b42f-9953-4bf3-b120-48ea5599b459" (UID: "ec91b42f-9953-4bf3-b120-48ea5599b459"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.809381 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec91b42f-9953-4bf3-b120-48ea5599b459" (UID: "ec91b42f-9953-4bf3-b120-48ea5599b459"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.809960 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-config" (OuterVolumeSpecName: "config") pod "ec91b42f-9953-4bf3-b120-48ea5599b459" (UID: "ec91b42f-9953-4bf3-b120-48ea5599b459"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.826463 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec91b42f-9953-4bf3-b120-48ea5599b459" (UID: "ec91b42f-9953-4bf3-b120-48ea5599b459"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.841656 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.841703 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.841714 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.841726 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.841734 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec91b42f-9953-4bf3-b120-48ea5599b459-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.841742 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-926d7\" (UniqueName: \"kubernetes.io/projected/ec91b42f-9953-4bf3-b120-48ea5599b459-kube-api-access-926d7\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:41 crc kubenswrapper[4897]: I0214 19:05:41.957044 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75dc4484db-pr977" Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.171492 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" event={"ID":"ec91b42f-9953-4bf3-b120-48ea5599b459","Type":"ContainerDied","Data":"291595da19a0d75b3d70a918780432ad18ea4ebc23c5340ab49bd8f0139431ae"} Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.171573 4897 scope.go:117] "RemoveContainer" containerID="869e466cd3ba37eb78c0ac59e106aa81833c305db7e694441fb1183186213bce" Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.171762 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5mhmn" Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.176558 4897 generic.go:334] "Generic (PLEG): container finished" podID="c17a810c-7598-46ab-93c3-c480c175ca61" containerID="eee94c9fd239e653cc0a0ffca6a13a2f2f49f3bf9e7f99594c093072baed3b5f" exitCode=0 Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.176598 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rjdz" event={"ID":"c17a810c-7598-46ab-93c3-c480c175ca61","Type":"ContainerDied","Data":"eee94c9fd239e653cc0a0ffca6a13a2f2f49f3bf9e7f99594c093072baed3b5f"} Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.282087 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5mhmn"] Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.302348 4897 scope.go:117] "RemoveContainer" containerID="58a140e91b06d8518d7ef54870e3425c5c46e253573006d7a78bb557c73b7065" Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.304731 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5mhmn"] Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.400836 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75dc4484db-pr977" Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.493176 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-669b4bcf6b-jk28p"] Feb 14 19:05:42 crc 
kubenswrapper[4897]: I0214 19:05:42.493409 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-669b4bcf6b-jk28p" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api-log" containerID="cri-o://3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68" gracePeriod=30 Feb 14 19:05:42 crc kubenswrapper[4897]: I0214 19:05:42.493545 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-669b4bcf6b-jk28p" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api" containerID="cri-o://84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7" gracePeriod=30 Feb 14 19:05:42 crc kubenswrapper[4897]: E0214 19:05:42.770812 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.187953 4897 generic.go:334] "Generic (PLEG): container finished" podID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" containerID="c5e4b357e3e2a2032666ed8ac6a46c18162b9637d256ffc347ec143c21db4e3c" exitCode=0 Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.188251 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-577t2" event={"ID":"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1","Type":"ContainerDied","Data":"c5e4b357e3e2a2032666ed8ac6a46c18162b9637d256ffc347ec143c21db4e3c"} Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.189593 4897 generic.go:334] "Generic (PLEG): container finished" podID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerID="3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68" exitCode=143 Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.189706 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-669b4bcf6b-jk28p" event={"ID":"6cfccb60-304d-4c37-b2ac-ed560f3830fe","Type":"ContainerDied","Data":"3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68"} Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.193400 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerStarted","Data":"eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd"} Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.194875 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerStarted","Data":"5447dfb3697cbfb7d2299c2acb826449c9be89342879d4e8b9345e6280e019bb"} Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.194960 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="ceilometer-notification-agent" containerID="cri-o://cb5a41fcb4ff4f2b959cd287547afa1a35f80510e5c23b646c7944fb2ab82e26" gracePeriod=30 Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.195096 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="sg-core" containerID="cri-o://871bcec7d63d4fb3e82c583fe331bd75af3adaa20272cf68242b8de460e29f4d" gracePeriod=30 Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.195215 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="proxy-httpd" containerID="cri-o://5447dfb3697cbfb7d2299c2acb826449c9be89342879d4e8b9345e6280e019bb" gracePeriod=30 Feb 14 19:05:43 crc kubenswrapper[4897]: I0214 19:05:43.808688 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" path="/var/lib/kubelet/pods/ec91b42f-9953-4bf3-b120-48ea5599b459/volumes" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.205788 4897 generic.go:334] "Generic (PLEG): container finished" podID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerID="eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd" exitCode=0 Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.205890 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerDied","Data":"eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd"} Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.209648 4897 generic.go:334] "Generic (PLEG): container finished" podID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerID="5447dfb3697cbfb7d2299c2acb826449c9be89342879d4e8b9345e6280e019bb" exitCode=0 Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.209670 4897 generic.go:334] "Generic (PLEG): container finished" podID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerID="871bcec7d63d4fb3e82c583fe331bd75af3adaa20272cf68242b8de460e29f4d" exitCode=2 Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.209679 4897 generic.go:334] "Generic (PLEG): container finished" podID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerID="cb5a41fcb4ff4f2b959cd287547afa1a35f80510e5c23b646c7944fb2ab82e26" exitCode=0 Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.209711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerDied","Data":"5447dfb3697cbfb7d2299c2acb826449c9be89342879d4e8b9345e6280e019bb"} Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.209774 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerDied","Data":"871bcec7d63d4fb3e82c583fe331bd75af3adaa20272cf68242b8de460e29f4d"} Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.209789 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerDied","Data":"cb5a41fcb4ff4f2b959cd287547afa1a35f80510e5c23b646c7944fb2ab82e26"} Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.335555 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2rjdz" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.420309 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-config-data\") pod \"c17a810c-7598-46ab-93c3-c480c175ca61\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.420363 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-222vr\" (UniqueName: \"kubernetes.io/projected/c17a810c-7598-46ab-93c3-c480c175ca61-kube-api-access-222vr\") pod \"c17a810c-7598-46ab-93c3-c480c175ca61\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.420413 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-combined-ca-bundle\") pod \"c17a810c-7598-46ab-93c3-c480c175ca61\" (UID: \"c17a810c-7598-46ab-93c3-c480c175ca61\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.425379 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17a810c-7598-46ab-93c3-c480c175ca61-kube-api-access-222vr" (OuterVolumeSpecName: "kube-api-access-222vr") pod 
"c17a810c-7598-46ab-93c3-c480c175ca61" (UID: "c17a810c-7598-46ab-93c3-c480c175ca61"). InnerVolumeSpecName "kube-api-access-222vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.470857 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c17a810c-7598-46ab-93c3-c480c175ca61" (UID: "c17a810c-7598-46ab-93c3-c480c175ca61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.516811 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-config-data" (OuterVolumeSpecName: "config-data") pod "c17a810c-7598-46ab-93c3-c480c175ca61" (UID: "c17a810c-7598-46ab-93c3-c480c175ca61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.523440 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.523489 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-222vr\" (UniqueName: \"kubernetes.io/projected/c17a810c-7598-46ab-93c3-c480c175ca61-kube-api-access-222vr\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.523507 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17a810c-7598-46ab-93c3-c480c175ca61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.793430 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-577t2" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.886245 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.929982 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-combined-ca-bundle\") pod \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.930092 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-config-data\") pod \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.930240 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-db-sync-config-data\") pod \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.930294 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-etc-machine-id\") pod \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.930325 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-scripts\") pod \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " Feb 14 
19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.930429 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdhqw\" (UniqueName: \"kubernetes.io/projected/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-kube-api-access-qdhqw\") pod \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\" (UID: \"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1\") " Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.930446 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" (UID: "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.932475 4897 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.934743 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-kube-api-access-qdhqw" (OuterVolumeSpecName: "kube-api-access-qdhqw") pod "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" (UID: "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1"). InnerVolumeSpecName "kube-api-access-qdhqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.934998 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" (UID: "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.935053 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-scripts" (OuterVolumeSpecName: "scripts") pod "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" (UID: "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.971746 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" (UID: "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:44 crc kubenswrapper[4897]: I0214 19:05:44.993694 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-config-data" (OuterVolumeSpecName: "config-data") pod "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" (UID: "6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034257 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46sfb\" (UniqueName: \"kubernetes.io/projected/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-kube-api-access-46sfb\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034370 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-sg-core-conf-yaml\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034428 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-combined-ca-bundle\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034481 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-scripts\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034535 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-run-httpd\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034592 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-config-data\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.034670 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-log-httpd\") pod \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\" (UID: \"744ddf55-9af8-4c94-8a91-4280fd9c8d6c\") " Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035151 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035173 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035182 4897 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035190 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035199 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdhqw\" (UniqueName: \"kubernetes.io/projected/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1-kube-api-access-qdhqw\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035328 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.035475 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.037548 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-scripts" (OuterVolumeSpecName: "scripts") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.037998 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-kube-api-access-46sfb" (OuterVolumeSpecName: "kube-api-access-46sfb") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "kube-api-access-46sfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.101250 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.120228 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.137424 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46sfb\" (UniqueName: \"kubernetes.io/projected/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-kube-api-access-46sfb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.137479 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.137499 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.137518 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.137537 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.137555 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.143845 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-config-data" (OuterVolumeSpecName: "config-data") pod "744ddf55-9af8-4c94-8a91-4280fd9c8d6c" (UID: "744ddf55-9af8-4c94-8a91-4280fd9c8d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.223784 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerStarted","Data":"06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b"} Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.229642 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"744ddf55-9af8-4c94-8a91-4280fd9c8d6c","Type":"ContainerDied","Data":"05587ed4ba696b5e71e10829f7c77636251ff15708457c5f4916154b99fe1d29"} Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.229702 4897 scope.go:117] "RemoveContainer" containerID="5447dfb3697cbfb7d2299c2acb826449c9be89342879d4e8b9345e6280e019bb" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.229869 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.234573 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-577t2" event={"ID":"6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1","Type":"ContainerDied","Data":"52c594c4b0a814706f3038551703f21b2a3aec9032fa649794b4ddbfc803dbaa"} Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.234620 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c594c4b0a814706f3038551703f21b2a3aec9032fa649794b4ddbfc803dbaa" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.234680 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-577t2" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.239691 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744ddf55-9af8-4c94-8a91-4280fd9c8d6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.241737 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rjdz" event={"ID":"c17a810c-7598-46ab-93c3-c480c175ca61","Type":"ContainerDied","Data":"dfb3b57d7b891e300ad76c5a8744de110daa830d14ea87557ee595f70db57434"} Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.241953 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb3b57d7b891e300ad76c5a8744de110daa830d14ea87557ee595f70db57434" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.242182 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-2rjdz" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.274435 4897 scope.go:117] "RemoveContainer" containerID="871bcec7d63d4fb3e82c583fe331bd75af3adaa20272cf68242b8de460e29f4d" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.318826 4897 scope.go:117] "RemoveContainer" containerID="cb5a41fcb4ff4f2b959cd287547afa1a35f80510e5c23b646c7944fb2ab82e26" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.354219 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.369672 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.383386 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.383882 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="init" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.383901 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="init" Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.383934 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="proxy-httpd" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.383940 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="proxy-httpd" Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.383956 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17a810c-7598-46ab-93c3-c480c175ca61" containerName="heat-db-sync" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.383962 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17a810c-7598-46ab-93c3-c480c175ca61" 
containerName="heat-db-sync" Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.383976 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="dnsmasq-dns" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.383982 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="dnsmasq-dns" Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.383993 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="sg-core" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.383998 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="sg-core" Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.384009 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" containerName="cinder-db-sync" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384014 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" containerName="cinder-db-sync" Feb 14 19:05:45 crc kubenswrapper[4897]: E0214 19:05:45.384042 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="ceilometer-notification-agent" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384049 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="ceilometer-notification-agent" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384235 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="sg-core" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384254 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17a810c-7598-46ab-93c3-c480c175ca61" 
containerName="heat-db-sync" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384265 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="proxy-httpd" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384278 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec91b42f-9953-4bf3-b120-48ea5599b459" containerName="dnsmasq-dns" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384291 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" containerName="ceilometer-notification-agent" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.384302 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" containerName="cinder-db-sync" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.398223 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.401528 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.408533 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.414676 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.554892 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.554976 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-config-data\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.555016 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-scripts\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.555054 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-log-httpd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.555168 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-run-httpd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.555202 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.555269 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7jd\" (UniqueName: 
\"kubernetes.io/projected/8fb69d8d-0e17-4fce-83d7-c983dade92d9-kube-api-access-wn7jd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.635384 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.637609 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.650090 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8fgns" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.650382 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.650518 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.650938 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656591 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-run-httpd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656635 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656686 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wn7jd\" (UniqueName: \"kubernetes.io/projected/8fb69d8d-0e17-4fce-83d7-c983dade92d9-kube-api-access-wn7jd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656750 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656810 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-config-data\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656834 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-scripts\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.656848 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-log-httpd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.657361 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-log-httpd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " 
pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.657458 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-run-httpd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.671707 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-scripts\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.672993 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.673736 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.681044 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-config-data\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.688363 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.730887 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wn7jd\" (UniqueName: \"kubernetes.io/projected/8fb69d8d-0e17-4fce-83d7-c983dade92d9-kube-api-access-wn7jd\") pod \"ceilometer-0\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " pod="openstack/ceilometer-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.758485 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.758824 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.758895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.758987 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-scripts\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.759122 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4vg\" (UniqueName: 
\"kubernetes.io/projected/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-kube-api-access-hj4vg\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.759217 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.819044 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744ddf55-9af8-4c94-8a91-4280fd9c8d6c" path="/var/lib/kubelet/pods/744ddf55-9af8-4c94-8a91-4280fd9c8d6c/volumes" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.821472 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4f5v"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.823256 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.861422 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.861489 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.861620 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.861645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.861679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-scripts\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.861754 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hj4vg\" (UniqueName: \"kubernetes.io/projected/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-kube-api-access-hj4vg\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.862004 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.868225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.868313 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4f5v"] Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.883810 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-scripts\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.883930 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.884286 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.894357 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.900191 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj4vg\" (UniqueName: \"kubernetes.io/projected/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-kube-api-access-hj4vg\") pod \"cinder-scheduler-0\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " pod="openstack/cinder-scheduler-0" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.963702 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.963992 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-config\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.964092 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.964141 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5rt\" (UniqueName: \"kubernetes.io/projected/e04401f8-3fac-42bb-924b-1235cb127ed3-kube-api-access-lj5rt\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.964173 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:45 crc kubenswrapper[4897]: I0214 19:05:45.964253 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.020860 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.030101 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.032340 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.036161 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.066813 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5rt\" (UniqueName: \"kubernetes.io/projected/e04401f8-3fac-42bb-924b-1235cb127ed3-kube-api-access-lj5rt\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.066864 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.066916 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.067002 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.067046 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-config\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.067108 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.067905 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.068461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.069048 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.069059 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-config\") pod 
\"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.077311 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.102850 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5rt\" (UniqueName: \"kubernetes.io/projected/e04401f8-3fac-42bb-924b-1235cb127ed3-kube-api-access-lj5rt\") pod \"dnsmasq-dns-5c9776ccc5-t4f5v\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.113907 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.150546 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.158176 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169360 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data-custom\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169419 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47617af5-9d67-473f-aefb-624a6da6a037-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169443 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-scripts\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169485 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47617af5-9d67-473f-aefb-624a6da6a037-logs\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169559 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.169592 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568lq\" (UniqueName: \"kubernetes.io/projected/47617af5-9d67-473f-aefb-624a6da6a037-kube-api-access-568lq\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.266048 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dd74d4b5f-8tgjp"] Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.266582 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dd74d4b5f-8tgjp" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-api" containerID="cri-o://73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585" gracePeriod=30 Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.266701 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dd74d4b5f-8tgjp" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-httpd" containerID="cri-o://a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172" gracePeriod=30 Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.271927 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data-custom\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.271976 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47617af5-9d67-473f-aefb-624a6da6a037-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.271997 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-scripts\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.272061 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47617af5-9d67-473f-aefb-624a6da6a037-logs\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.272107 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.272136 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.272169 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-568lq\" (UniqueName: \"kubernetes.io/projected/47617af5-9d67-473f-aefb-624a6da6a037-kube-api-access-568lq\") pod \"cinder-api-0\" (UID: 
\"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.275320 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47617af5-9d67-473f-aefb-624a6da6a037-logs\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.275361 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47617af5-9d67-473f-aefb-624a6da6a037-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.294949 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-scripts\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.296200 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.296225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data-custom\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.303827 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.309619 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-567589579f-jbtqc"] Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.311531 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.309770 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-568lq\" (UniqueName: \"kubernetes.io/projected/47617af5-9d67-473f-aefb-624a6da6a037-kube-api-access-568lq\") pod \"cinder-api-0\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.336718 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-567589579f-jbtqc"] Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.342487 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zxbnk" podStartSLOduration=4.705265161 podStartE2EDuration="14.342465308s" podCreationTimestamp="2026-02-14 19:05:32 +0000 UTC" firstStartedPulling="2026-02-14 19:05:35.033689674 +0000 UTC m=+1388.010098157" lastFinishedPulling="2026-02-14 19:05:44.670889821 +0000 UTC m=+1397.647298304" observedRunningTime="2026-02-14 19:05:46.33488927 +0000 UTC m=+1399.311297773" watchObservedRunningTime="2026-02-14 19:05:46.342465308 +0000 UTC m=+1399.318873781" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.390575 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2hr\" (UniqueName: \"kubernetes.io/projected/ec334ed0-f181-451a-8f76-12defbfc2460-kube-api-access-8s2hr\") pod \"neutron-567589579f-jbtqc\" 
(UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.391174 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-public-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.391299 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-config\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.391503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-ovndb-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.391702 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-httpd-config\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.391792 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-internal-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: 
\"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.391976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-combined-ca-bundle\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.409824 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.420112 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6dd74d4b5f-8tgjp" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": read tcp 10.217.0.2:46242->10.217.0.194:9696: read: connection reset by peer" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496334 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-ovndb-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496468 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-httpd-config\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496509 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-internal-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496607 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-combined-ca-bundle\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496633 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2hr\" (UniqueName: \"kubernetes.io/projected/ec334ed0-f181-451a-8f76-12defbfc2460-kube-api-access-8s2hr\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496661 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-public-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.496686 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-config\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.505279 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-config\") pod \"neutron-567589579f-jbtqc\" (UID: 
\"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.506009 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-httpd-config\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.507199 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-internal-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.508927 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-combined-ca-bundle\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.515166 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-ovndb-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.519192 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec334ed0-f181-451a-8f76-12defbfc2460-public-tls-certs\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 
19:05:46.534298 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2hr\" (UniqueName: \"kubernetes.io/projected/ec334ed0-f181-451a-8f76-12defbfc2460-kube-api-access-8s2hr\") pod \"neutron-567589579f-jbtqc\" (UID: \"ec334ed0-f181-451a-8f76-12defbfc2460\") " pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.642769 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.811276 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:05:46 crc kubenswrapper[4897]: W0214 19:05:46.836473 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb69d8d_0e17_4fce_83d7_c983dade92d9.slice/crio-cdf761d947d703368c31790fd8fd4b55ff0eb0660771cba73b485c113b8d11c9 WatchSource:0}: Error finding container cdf761d947d703368c31790fd8fd4b55ff0eb0660771cba73b485c113b8d11c9: Status 404 returned error can't find the container with id cdf761d947d703368c31790fd8fd4b55ff0eb0660771cba73b485c113b8d11c9 Feb 14 19:05:46 crc kubenswrapper[4897]: I0214 19:05:46.960867 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.132886 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data-custom\") pod \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.132991 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8pn5\" (UniqueName: \"kubernetes.io/projected/6cfccb60-304d-4c37-b2ac-ed560f3830fe-kube-api-access-x8pn5\") pod \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.133141 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cfccb60-304d-4c37-b2ac-ed560f3830fe-logs\") pod \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.133277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-combined-ca-bundle\") pod \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.133314 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data\") pod \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\" (UID: \"6cfccb60-304d-4c37-b2ac-ed560f3830fe\") " Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.136406 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6cfccb60-304d-4c37-b2ac-ed560f3830fe-logs" (OuterVolumeSpecName: "logs") pod "6cfccb60-304d-4c37-b2ac-ed560f3830fe" (UID: "6cfccb60-304d-4c37-b2ac-ed560f3830fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.142160 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6cfccb60-304d-4c37-b2ac-ed560f3830fe" (UID: "6cfccb60-304d-4c37-b2ac-ed560f3830fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.142212 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfccb60-304d-4c37-b2ac-ed560f3830fe-kube-api-access-x8pn5" (OuterVolumeSpecName: "kube-api-access-x8pn5") pod "6cfccb60-304d-4c37-b2ac-ed560f3830fe" (UID: "6cfccb60-304d-4c37-b2ac-ed560f3830fe"). InnerVolumeSpecName "kube-api-access-x8pn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.222778 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cfccb60-304d-4c37-b2ac-ed560f3830fe" (UID: "6cfccb60-304d-4c37-b2ac-ed560f3830fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.242792 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cfccb60-304d-4c37-b2ac-ed560f3830fe-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.243002 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.243014 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.243215 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8pn5\" (UniqueName: \"kubernetes.io/projected/6cfccb60-304d-4c37-b2ac-ed560f3830fe-kube-api-access-x8pn5\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.243253 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.250126 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data" (OuterVolumeSpecName: "config-data") pod "6cfccb60-304d-4c37-b2ac-ed560f3830fe" (UID: "6cfccb60-304d-4c37-b2ac-ed560f3830fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.273799 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.319638 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4f5v"] Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.330420 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47617af5-9d67-473f-aefb-624a6da6a037","Type":"ContainerStarted","Data":"c118173f340810c11e7074974f12c2c8ad26cf0c7fa9923c6ed3f408f86dfea7"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.336037 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b40e3c50-9e19-4e83-af97-75ddf8aa8d88","Type":"ContainerStarted","Data":"9557d514311944daa587aca1a5b0bf39f409071ceaac780d001cca5f58c47f16"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.345393 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfccb60-304d-4c37-b2ac-ed560f3830fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.351752 4897 generic.go:334] "Generic (PLEG): container finished" podID="642b5930-c972-4455-a280-932d5fda60e5" containerID="a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172" exitCode=0 Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.351844 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dd74d4b5f-8tgjp" event={"ID":"642b5930-c972-4455-a280-932d5fda60e5","Type":"ContainerDied","Data":"a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.372710 4897 generic.go:334] "Generic (PLEG): container finished" podID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" 
containerID="84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7" exitCode=0 Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.372854 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-669b4bcf6b-jk28p" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.373715 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669b4bcf6b-jk28p" event={"ID":"6cfccb60-304d-4c37-b2ac-ed560f3830fe","Type":"ContainerDied","Data":"84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.373764 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-669b4bcf6b-jk28p" event={"ID":"6cfccb60-304d-4c37-b2ac-ed560f3830fe","Type":"ContainerDied","Data":"5e50b2751d0d92aca6c5b02f8b69e0cca1689520cbb8ba878c6dfcd5aa486e29"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.373809 4897 scope.go:117] "RemoveContainer" containerID="84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.378110 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerStarted","Data":"cdf761d947d703368c31790fd8fd4b55ff0eb0660771cba73b485c113b8d11c9"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.399426 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" event={"ID":"e04401f8-3fac-42bb-924b-1235cb127ed3","Type":"ContainerStarted","Data":"ad1b6c7679c3c3da5a6922c9fce4e458226ae29d698d8deee28e371711aaf297"} Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.442094 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-669b4bcf6b-jk28p"] Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.452318 4897 scope.go:117] "RemoveContainer" 
containerID="3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.486626 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-669b4bcf6b-jk28p"] Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.572192 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-567589579f-jbtqc"] Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.574686 4897 scope.go:117] "RemoveContainer" containerID="84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7" Feb 14 19:05:47 crc kubenswrapper[4897]: E0214 19:05:47.575758 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7\": container with ID starting with 84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7 not found: ID does not exist" containerID="84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.575785 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7"} err="failed to get container status \"84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7\": rpc error: code = NotFound desc = could not find container \"84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7\": container with ID starting with 84e213c59cf7e3f9b97ea123e8a0a8aa03db9ea8b937582a040daa9a0d92a8d7 not found: ID does not exist" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.575806 4897 scope.go:117] "RemoveContainer" containerID="3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68" Feb 14 19:05:47 crc kubenswrapper[4897]: E0214 19:05:47.576023 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68\": container with ID starting with 3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68 not found: ID does not exist" containerID="3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.576058 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68"} err="failed to get container status \"3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68\": rpc error: code = NotFound desc = could not find container \"3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68\": container with ID starting with 3f3c6aee89597acc0c4e089826806382f3c22822c9d1201a9f1084bb32f12e68 not found: ID does not exist" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.671252 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6dd74d4b5f-8tgjp" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": dial tcp 10.217.0.194:9696: connect: connection refused" Feb 14 19:05:47 crc kubenswrapper[4897]: I0214 19:05:47.822913 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" path="/var/lib/kubelet/pods/6cfccb60-304d-4c37-b2ac-ed560f3830fe/volumes" Feb 14 19:05:48 crc kubenswrapper[4897]: I0214 19:05:48.111908 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 19:05:48 crc kubenswrapper[4897]: I0214 19:05:48.418073 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerStarted","Data":"760a6f2275c9ee6c8d45053f8eac13713f8914b73393fe564d116b644dd6e7c5"} Feb 14 19:05:48 crc kubenswrapper[4897]: I0214 19:05:48.430445 
4897 generic.go:334] "Generic (PLEG): container finished" podID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerID="5793d7b05973908f0526bd3adac4a3c62d4e21ec11d577c92127c1b132491be6" exitCode=0 Feb 14 19:05:48 crc kubenswrapper[4897]: I0214 19:05:48.430503 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" event={"ID":"e04401f8-3fac-42bb-924b-1235cb127ed3","Type":"ContainerDied","Data":"5793d7b05973908f0526bd3adac4a3c62d4e21ec11d577c92127c1b132491be6"} Feb 14 19:05:48 crc kubenswrapper[4897]: I0214 19:05:48.435697 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-567589579f-jbtqc" event={"ID":"ec334ed0-f181-451a-8f76-12defbfc2460","Type":"ContainerStarted","Data":"e95287f4f4c5aefc1886bd477e6a6bdb2a658b3e9ff2a654991eb35c36df98b3"} Feb 14 19:05:48 crc kubenswrapper[4897]: I0214 19:05:48.435733 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-567589579f-jbtqc" event={"ID":"ec334ed0-f181-451a-8f76-12defbfc2460","Type":"ContainerStarted","Data":"6418eed14e74f146c1eb3bfe461d7e9a7b9f1d3b3ab37e66afe665e20eadfb2e"} Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.469632 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-567589579f-jbtqc" event={"ID":"ec334ed0-f181-451a-8f76-12defbfc2460","Type":"ContainerStarted","Data":"749862e02cb847826aad17a288f64f76f2c4cd3c577e824989562e89c62b40c3"} Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.470425 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.475396 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b40e3c50-9e19-4e83-af97-75ddf8aa8d88","Type":"ContainerStarted","Data":"b03e5ffab161800ddd9e3a9db87fb8e513602d6cc7ba4acb210aa13c4b454bd9"} Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.476837 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerStarted","Data":"1d952058c55e433be40d9c8cfa8f59ce4da5b40845d30717f31a857b05b6797c"} Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.483429 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" event={"ID":"e04401f8-3fac-42bb-924b-1235cb127ed3","Type":"ContainerStarted","Data":"34c914b71c349cb7c38e42c539e03e76c6eec67c09b0d60f9805530d85c70491"} Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.483479 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.493235 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-567589579f-jbtqc" podStartSLOduration=3.493191054 podStartE2EDuration="3.493191054s" podCreationTimestamp="2026-02-14 19:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:49.486086312 +0000 UTC m=+1402.462494805" watchObservedRunningTime="2026-02-14 19:05:49.493191054 +0000 UTC m=+1402.469599547" Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.495211 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47617af5-9d67-473f-aefb-624a6da6a037","Type":"ContainerStarted","Data":"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"} Feb 14 19:05:49 crc kubenswrapper[4897]: I0214 19:05:49.527616 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" podStartSLOduration=4.527599844 podStartE2EDuration="4.527599844s" podCreationTimestamp="2026-02-14 19:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 19:05:49.510126276 +0000 UTC m=+1402.486534789" watchObservedRunningTime="2026-02-14 19:05:49.527599844 +0000 UTC m=+1402.504008317" Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.506003 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerStarted","Data":"ddc76b40d2e013af34001f733a82ec7a31602e292c41f23b0a0dcc2397b9bdb8"} Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.507438 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47617af5-9d67-473f-aefb-624a6da6a037","Type":"ContainerStarted","Data":"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"} Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.507578 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api-log" containerID="cri-o://50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430" gracePeriod=30 Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.507690 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api" containerID="cri-o://3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2" gracePeriod=30 Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.507857 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.515962 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b40e3c50-9e19-4e83-af97-75ddf8aa8d88","Type":"ContainerStarted","Data":"d474a8ee93a1b643b6e71d96bb3fac578ae40675bde2d9f660ca216e3f6f39de"} Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.536341 4897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.536324881 podStartE2EDuration="5.536324881s" podCreationTimestamp="2026-02-14 19:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:50.532894873 +0000 UTC m=+1403.509303366" watchObservedRunningTime="2026-02-14 19:05:50.536324881 +0000 UTC m=+1403.512733364" Feb 14 19:05:50 crc kubenswrapper[4897]: I0214 19:05:50.566790 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.69091719 podStartE2EDuration="5.566768026s" podCreationTimestamp="2026-02-14 19:05:45 +0000 UTC" firstStartedPulling="2026-02-14 19:05:47.225436137 +0000 UTC m=+1400.201844620" lastFinishedPulling="2026-02-14 19:05:48.101286973 +0000 UTC m=+1401.077695456" observedRunningTime="2026-02-14 19:05:50.554253263 +0000 UTC m=+1403.530661756" watchObservedRunningTime="2026-02-14 19:05:50.566768026 +0000 UTC m=+1403.543176509" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.152335 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.163554 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dd74d4b5f-8tgjp" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.242956 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-internal-tls-certs\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.243012 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-httpd-config\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.243075 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-public-tls-certs\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.243100 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-config\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.243153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-ovndb-tls-certs\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.243196 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8kxn\" (UniqueName: 
\"kubernetes.io/projected/642b5930-c972-4455-a280-932d5fda60e5-kube-api-access-s8kxn\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.243288 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-combined-ca-bundle\") pod \"642b5930-c972-4455-a280-932d5fda60e5\" (UID: \"642b5930-c972-4455-a280-932d5fda60e5\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.253294 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.256166 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642b5930-c972-4455-a280-932d5fda60e5-kube-api-access-s8kxn" (OuterVolumeSpecName: "kube-api-access-s8kxn") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "kube-api-access-s8kxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.316195 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.331536 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-config" (OuterVolumeSpecName: "config") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.344126 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.345651 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.345680 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.345691 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.345701 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8kxn\" (UniqueName: \"kubernetes.io/projected/642b5930-c972-4455-a280-932d5fda60e5-kube-api-access-s8kxn\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:51 crc 
kubenswrapper[4897]: I0214 19:05:51.345713 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.348120 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.394797 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.413264 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "642b5930-c972-4455-a280-932d5fda60e5" (UID: "642b5930-c972-4455-a280-932d5fda60e5"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.452882 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47617af5-9d67-473f-aefb-624a6da6a037-etc-machine-id\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.453188 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-scripts\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") " Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.453025 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47617af5-9d67-473f-aefb-624a6da6a037-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.453687 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data-custom\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") "
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.453720 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") "
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.453740 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47617af5-9d67-473f-aefb-624a6da6a037-logs\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") "
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.454006 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-combined-ca-bundle\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") "
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.454195 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-568lq\" (UniqueName: \"kubernetes.io/projected/47617af5-9d67-473f-aefb-624a6da6a037-kube-api-access-568lq\") pod \"47617af5-9d67-473f-aefb-624a6da6a037\" (UID: \"47617af5-9d67-473f-aefb-624a6da6a037\") "
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.454240 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47617af5-9d67-473f-aefb-624a6da6a037-logs" (OuterVolumeSpecName: "logs") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.454967 4897 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47617af5-9d67-473f-aefb-624a6da6a037-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.455050 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47617af5-9d67-473f-aefb-624a6da6a037-logs\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.455170 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.455228 4897 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b5930-c972-4455-a280-932d5fda60e5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.458131 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-scripts" (OuterVolumeSpecName: "scripts") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.458221 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.458267 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47617af5-9d67-473f-aefb-624a6da6a037-kube-api-access-568lq" (OuterVolumeSpecName: "kube-api-access-568lq") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "kube-api-access-568lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.489987 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.515165 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data" (OuterVolumeSpecName: "config-data") pod "47617af5-9d67-473f-aefb-624a6da6a037" (UID: "47617af5-9d67-473f-aefb-624a6da6a037"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.528733 4897 generic.go:334] "Generic (PLEG): container finished" podID="642b5930-c972-4455-a280-932d5fda60e5" containerID="73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585" exitCode=0
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.528912 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dd74d4b5f-8tgjp"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.528876 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dd74d4b5f-8tgjp" event={"ID":"642b5930-c972-4455-a280-932d5fda60e5","Type":"ContainerDied","Data":"73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585"}
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.528957 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dd74d4b5f-8tgjp" event={"ID":"642b5930-c972-4455-a280-932d5fda60e5","Type":"ContainerDied","Data":"4462136dd3fa8cfd6f68c4fa8f5e00546d27ee6daf0fd1aaa3ff988e976b3fff"}
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.528986 4897 scope.go:117] "RemoveContainer" containerID="a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.536554 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerStarted","Data":"1ef467dc1eac14c9f1cfb39daf5dfa4b241eb9208fe08d70df48f51546b37db3"}
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.538610 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.541145 4897 generic.go:334] "Generic (PLEG): container finished" podID="47617af5-9d67-473f-aefb-624a6da6a037" containerID="3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2" exitCode=0
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.541230 4897 generic.go:334] "Generic (PLEG): container finished" podID="47617af5-9d67-473f-aefb-624a6da6a037" containerID="50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430" exitCode=143
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.542381 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47617af5-9d67-473f-aefb-624a6da6a037","Type":"ContainerDied","Data":"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"}
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.542570 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47617af5-9d67-473f-aefb-624a6da6a037","Type":"ContainerDied","Data":"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"}
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.542662 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47617af5-9d67-473f-aefb-624a6da6a037","Type":"ContainerDied","Data":"c118173f340810c11e7074974f12c2c8ad26cf0c7fa9923c6ed3f408f86dfea7"}
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.543565 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.563086 4897 scope.go:117] "RemoveContainer" containerID="73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.563495 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.563517 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.563531 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.563545 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47617af5-9d67-473f-aefb-624a6da6a037-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.563556 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-568lq\" (UniqueName: \"kubernetes.io/projected/47617af5-9d67-473f-aefb-624a6da6a037-kube-api-access-568lq\") on node \"crc\" DevicePath \"\""
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.576861 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.658277061 podStartE2EDuration="6.576840684s" podCreationTimestamp="2026-02-14 19:05:45 +0000 UTC" firstStartedPulling="2026-02-14 19:05:46.838287438 +0000 UTC m=+1399.814695921" lastFinishedPulling="2026-02-14 19:05:50.756851061 +0000 UTC m=+1403.733259544" observedRunningTime="2026-02-14 19:05:51.554843364 +0000 UTC m=+1404.531251867" watchObservedRunningTime="2026-02-14 19:05:51.576840684 +0000 UTC m=+1404.553249167"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.608604 4897 scope.go:117] "RemoveContainer" containerID="a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.608698 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dd74d4b5f-8tgjp"]
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.608974 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172\": container with ID starting with a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172 not found: ID does not exist" containerID="a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.608998 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172"} err="failed to get container status \"a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172\": rpc error: code = NotFound desc = could not find container \"a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172\": container with ID starting with a334f6c8f13e82eb6ab5c131b72fcadfe8822654a4d9beeae21bed808fa49172 not found: ID does not exist"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.609016 4897 scope.go:117] "RemoveContainer" containerID="73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.609371 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585\": container with ID starting with 73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585 not found: ID does not exist" containerID="73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.609423 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585"} err="failed to get container status \"73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585\": rpc error: code = NotFound desc = could not find container \"73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585\": container with ID starting with 73c1f6b1e0814defe620f945e54efdbc1f56cfeef435df47c1631472eee54585 not found: ID does not exist"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.609454 4897 scope.go:117] "RemoveContainer" containerID="3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.632693 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6dd74d4b5f-8tgjp"]
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.651483 4897 scope.go:117] "RemoveContainer" containerID="50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.686040 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.695052 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703272 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.703799 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703818 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.703831 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703837 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.703864 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-httpd"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703870 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-httpd"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.703879 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api-log"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703885 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api-log"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.703894 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api-log"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703902 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api-log"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.703916 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.703922 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.704166 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-httpd"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.704183 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.704196 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api-log"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.704208 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="47617af5-9d67-473f-aefb-624a6da6a037" containerName="cinder-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.704218 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfccb60-304d-4c37-b2ac-ed560f3830fe" containerName="barbican-api-log"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.704228 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="642b5930-c972-4455-a280-932d5fda60e5" containerName="neutron-api"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.707011 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.710280 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.710458 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.711774 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.711831 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.767858 4897 scope.go:117] "RemoveContainer" containerID="3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.768367 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2\": container with ID starting with 3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2 not found: ID does not exist" containerID="3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.768408 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"} err="failed to get container status \"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2\": rpc error: code = NotFound desc = could not find container \"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2\": container with ID starting with 3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2 not found: ID does not exist"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.768435 4897 scope.go:117] "RemoveContainer" containerID="50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"
Feb 14 19:05:51 crc kubenswrapper[4897]: E0214 19:05:51.769016 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430\": container with ID starting with 50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430 not found: ID does not exist" containerID="50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.769079 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"} err="failed to get container status \"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430\": rpc error: code = NotFound desc = could not find container \"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430\": container with ID starting with 50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430 not found: ID does not exist"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.769105 4897 scope.go:117] "RemoveContainer" containerID="3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.769738 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2"} err="failed to get container status \"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2\": rpc error: code = NotFound desc = could not find container \"3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2\": container with ID starting with 3227363b79f079134f30a71b8b5a456ce9f60c9af2a09f86018eb706aeac00e2 not found: ID does not exist"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.769761 4897 scope.go:117] "RemoveContainer" containerID="50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.770494 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430"} err="failed to get container status \"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430\": rpc error: code = NotFound desc = could not find container \"50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430\": container with ID starting with 50b41489918b7bdd61db55f02e63406d20eb9247394027acee1bf8c934c76430 not found: ID does not exist"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773053 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773186 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tjf5\" (UniqueName: \"kubernetes.io/projected/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-kube-api-access-7tjf5\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773311 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773373 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773480 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-logs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773631 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-scripts\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773656 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.773721 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-config-data\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.807724 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47617af5-9d67-473f-aefb-624a6da6a037" path="/var/lib/kubelet/pods/47617af5-9d67-473f-aefb-624a6da6a037/volumes"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.808371 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="642b5930-c972-4455-a280-932d5fda60e5" path="/var/lib/kubelet/pods/642b5930-c972-4455-a280-932d5fda60e5/volumes"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.876115 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-logs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.876507 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-logs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.877289 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.878159 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-scripts\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.878213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.878280 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-config-data\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.878394 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.878536 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tjf5\" (UniqueName: \"kubernetes.io/projected/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-kube-api-access-7tjf5\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.879301 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.879483 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.879626 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.883465 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.887542 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.887558 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.888402 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.893049 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-scripts\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.899883 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-config-data\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:51 crc kubenswrapper[4897]: I0214 19:05:51.905901 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tjf5\" (UniqueName: \"kubernetes.io/projected/5c87e5c6-bedb-4830-9ad3-96d9eda6f476-kube-api-access-7tjf5\") pod \"cinder-api-0\" (UID: \"5c87e5c6-bedb-4830-9ad3-96d9eda6f476\") " pod="openstack/cinder-api-0"
Feb 14 19:05:52 crc kubenswrapper[4897]: I0214 19:05:52.059056 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 14 19:05:52 crc kubenswrapper[4897]: W0214 19:05:52.596103 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c87e5c6_bedb_4830_9ad3_96d9eda6f476.slice/crio-bbe5279c4121a00783fa0fa7e541742591d8ce862a9c3f316bd0a1c2334b4721 WatchSource:0}: Error finding container bbe5279c4121a00783fa0fa7e541742591d8ce862a9c3f316bd0a1c2334b4721: Status 404 returned error can't find the container with id bbe5279c4121a00783fa0fa7e541742591d8ce862a9c3f316bd0a1c2334b4721
Feb 14 19:05:52 crc kubenswrapper[4897]: I0214 19:05:52.596836 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.416465 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.416706 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.480906 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.569043 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5c87e5c6-bedb-4830-9ad3-96d9eda6f476","Type":"ContainerStarted","Data":"6c4c773c58a9e8f2c66ffdb300f2a84840a124e31776d54b959fb4b7181e2edf"}
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.569091 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5c87e5c6-bedb-4830-9ad3-96d9eda6f476","Type":"ContainerStarted","Data":"bbe5279c4121a00783fa0fa7e541742591d8ce862a9c3f316bd0a1c2334b4721"}
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.647269 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:53 crc kubenswrapper[4897]: I0214 19:05:53.727907 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxbnk"]
Feb 14 19:05:54 crc kubenswrapper[4897]: I0214 19:05:54.584728 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5c87e5c6-bedb-4830-9ad3-96d9eda6f476","Type":"ContainerStarted","Data":"5131dfc1d68dbcfcfcf6f9e25fad1664ba6f8e25934ad3b378c54da2254bbf35"}
Feb 14 19:05:54 crc kubenswrapper[4897]: I0214 19:05:54.585466 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 14 19:05:54 crc kubenswrapper[4897]: I0214 19:05:54.630422 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.6303945410000003 podStartE2EDuration="3.630394541s" podCreationTimestamp="2026-02-14 19:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:54.610082744 +0000 UTC m=+1407.586491317" watchObservedRunningTime="2026-02-14 19:05:54.630394541 +0000 UTC m=+1407.606803064"
Feb 14 19:05:55 crc kubenswrapper[4897]: I0214 19:05:55.603540 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zxbnk" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="registry-server" containerID="cri-o://06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b" gracePeriod=2
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.160311 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v"
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.238097 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-c4968"]
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.238599 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-c4968" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="dnsmasq-dns" containerID="cri-o://7fb45f764d0e47dbee23705b6d10a036878347ac28a4af43fa589645ad4eea2a" gracePeriod=10
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.271459 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-c4968" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused"
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.433155 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxbnk"
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.492839 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.533931 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-catalog-content\") pod \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") "
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.533989 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ntrk\" (UniqueName: \"kubernetes.io/projected/9113bf61-0b89-4343-b4e4-93cc2f704cf9-kube-api-access-5ntrk\") pod \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") "
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.534152 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-utilities\") pod \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\" (UID: \"9113bf61-0b89-4343-b4e4-93cc2f704cf9\") "
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.535450 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-utilities" (OuterVolumeSpecName: "utilities") pod "9113bf61-0b89-4343-b4e4-93cc2f704cf9" (UID: "9113bf61-0b89-4343-b4e4-93cc2f704cf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.554741 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9113bf61-0b89-4343-b4e4-93cc2f704cf9-kube-api-access-5ntrk" (OuterVolumeSpecName: "kube-api-access-5ntrk") pod "9113bf61-0b89-4343-b4e4-93cc2f704cf9" (UID: "9113bf61-0b89-4343-b4e4-93cc2f704cf9"). InnerVolumeSpecName "kube-api-access-5ntrk".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.582156 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.621274 4897 generic.go:334] "Generic (PLEG): container finished" podID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerID="7fb45f764d0e47dbee23705b6d10a036878347ac28a4af43fa589645ad4eea2a" exitCode=0 Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.621337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-c4968" event={"ID":"5911804f-29c7-44a8-8688-0bc0fe0a46ac","Type":"ContainerDied","Data":"7fb45f764d0e47dbee23705b6d10a036878347ac28a4af43fa589645ad4eea2a"} Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.624244 4897 generic.go:334] "Generic (PLEG): container finished" podID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerID="06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b" exitCode=0 Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.624430 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="cinder-scheduler" containerID="cri-o://b03e5ffab161800ddd9e3a9db87fb8e513602d6cc7ba4acb210aa13c4b454bd9" gracePeriod=30 Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.624717 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxbnk" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.625386 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerDied","Data":"06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b"} Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.625414 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxbnk" event={"ID":"9113bf61-0b89-4343-b4e4-93cc2f704cf9","Type":"ContainerDied","Data":"a3e0b31d5e928939ac44d82ef8ef55fc3d0773b70e77738809bce604b7b315f1"} Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.625432 4897 scope.go:117] "RemoveContainer" containerID="06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.625782 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="probe" containerID="cri-o://d474a8ee93a1b643b6e71d96bb3fac578ae40675bde2d9f660ca216e3f6f39de" gracePeriod=30 Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.637091 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ntrk\" (UniqueName: \"kubernetes.io/projected/9113bf61-0b89-4343-b4e4-93cc2f704cf9-kube-api-access-5ntrk\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.637829 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.658361 4897 scope.go:117] "RemoveContainer" containerID="eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 
19:05:56.659355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9113bf61-0b89-4343-b4e4-93cc2f704cf9" (UID: "9113bf61-0b89-4343-b4e4-93cc2f704cf9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.705484 4897 scope.go:117] "RemoveContainer" containerID="d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.740295 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9113bf61-0b89-4343-b4e4-93cc2f704cf9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.802297 4897 scope.go:117] "RemoveContainer" containerID="06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b" Feb 14 19:05:56 crc kubenswrapper[4897]: E0214 19:05:56.803326 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b\": container with ID starting with 06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b not found: ID does not exist" containerID="06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.803363 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b"} err="failed to get container status \"06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b\": rpc error: code = NotFound desc = could not find container \"06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b\": container with ID starting with 
06e14efc6be3cdbabe81c38b4386cca770eb38fde459f4d8e32bd8c12973dc6b not found: ID does not exist" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.803384 4897 scope.go:117] "RemoveContainer" containerID="eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd" Feb 14 19:05:56 crc kubenswrapper[4897]: E0214 19:05:56.803710 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd\": container with ID starting with eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd not found: ID does not exist" containerID="eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.803733 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd"} err="failed to get container status \"eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd\": rpc error: code = NotFound desc = could not find container \"eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd\": container with ID starting with eb6bc599b2c73789ed3a6e08c06490781b3d327bdb3a712c9afbb33fb4177bfd not found: ID does not exist" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.803746 4897 scope.go:117] "RemoveContainer" containerID="d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c" Feb 14 19:05:56 crc kubenswrapper[4897]: E0214 19:05:56.804124 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c\": container with ID starting with d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c not found: ID does not exist" containerID="d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c" Feb 14 19:05:56 crc 
kubenswrapper[4897]: I0214 19:05:56.804144 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c"} err="failed to get container status \"d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c\": rpc error: code = NotFound desc = could not find container \"d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c\": container with ID starting with d3cc87f22a000ef6381c38e1a9d3d8b1aa1e2339d301b786609ebb185125ed7c not found: ID does not exist" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.890824 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.976565 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxbnk"] Feb 14 19:05:56 crc kubenswrapper[4897]: I0214 19:05:56.991443 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zxbnk"] Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.047406 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-config\") pod \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.047516 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-sb\") pod \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.047563 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-svc\") pod \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.047779 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phflc\" (UniqueName: \"kubernetes.io/projected/5911804f-29c7-44a8-8688-0bc0fe0a46ac-kube-api-access-phflc\") pod \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.047813 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-nb\") pod \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.047853 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-swift-storage-0\") pod \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\" (UID: \"5911804f-29c7-44a8-8688-0bc0fe0a46ac\") " Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.060133 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5911804f-29c7-44a8-8688-0bc0fe0a46ac-kube-api-access-phflc" (OuterVolumeSpecName: "kube-api-access-phflc") pod "5911804f-29c7-44a8-8688-0bc0fe0a46ac" (UID: "5911804f-29c7-44a8-8688-0bc0fe0a46ac"). InnerVolumeSpecName "kube-api-access-phflc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.152735 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phflc\" (UniqueName: \"kubernetes.io/projected/5911804f-29c7-44a8-8688-0bc0fe0a46ac-kube-api-access-phflc\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.168701 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5911804f-29c7-44a8-8688-0bc0fe0a46ac" (UID: "5911804f-29c7-44a8-8688-0bc0fe0a46ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.181417 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5911804f-29c7-44a8-8688-0bc0fe0a46ac" (UID: "5911804f-29c7-44a8-8688-0bc0fe0a46ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.208617 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5911804f-29c7-44a8-8688-0bc0fe0a46ac" (UID: "5911804f-29c7-44a8-8688-0bc0fe0a46ac"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.237399 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5911804f-29c7-44a8-8688-0bc0fe0a46ac" (UID: "5911804f-29c7-44a8-8688-0bc0fe0a46ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.245824 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-config" (OuterVolumeSpecName: "config") pod "5911804f-29c7-44a8-8688-0bc0fe0a46ac" (UID: "5911804f-29c7-44a8-8688-0bc0fe0a46ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.255483 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.255517 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.255528 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.255536 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:57 crc kubenswrapper[4897]: 
I0214 19:05:57.255545 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5911804f-29c7-44a8-8688-0bc0fe0a46ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.636624 4897 generic.go:334] "Generic (PLEG): container finished" podID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerID="d474a8ee93a1b643b6e71d96bb3fac578ae40675bde2d9f660ca216e3f6f39de" exitCode=0 Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.636703 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b40e3c50-9e19-4e83-af97-75ddf8aa8d88","Type":"ContainerDied","Data":"d474a8ee93a1b643b6e71d96bb3fac578ae40675bde2d9f660ca216e3f6f39de"} Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.637137 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fc586c7b4-8x7qx" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.640240 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-c4968" event={"ID":"5911804f-29c7-44a8-8688-0bc0fe0a46ac","Type":"ContainerDied","Data":"37327f376d9119a7befb95a3c120918ab578c79b1ee6128a34109555fe04a698"} Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.640287 4897 scope.go:117] "RemoveContainer" containerID="7fb45f764d0e47dbee23705b6d10a036878347ac28a4af43fa589645ad4eea2a" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.640335 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-c4968" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.647810 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fc586c7b4-8x7qx" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.681539 4897 scope.go:117] "RemoveContainer" containerID="798ee1583a2be97a6e93df3e717089c60efa4b18d25da73d8ec19ad8e4a6b419" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.694485 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-c4968"] Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.708663 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-c4968"] Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.806929 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" path="/var/lib/kubelet/pods/5911804f-29c7-44a8-8688-0bc0fe0a46ac/volumes" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.807561 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" path="/var/lib/kubelet/pods/9113bf61-0b89-4343-b4e4-93cc2f704cf9/volumes" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.968818 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bb56bbbfb-v5pf9"] Feb 14 19:05:57 crc kubenswrapper[4897]: E0214 19:05:57.969419 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="extract-content" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969436 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="extract-content" Feb 14 19:05:57 crc kubenswrapper[4897]: E0214 19:05:57.969458 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" 
containerName="extract-utilities" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969465 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="extract-utilities" Feb 14 19:05:57 crc kubenswrapper[4897]: E0214 19:05:57.969482 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="dnsmasq-dns" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969489 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="dnsmasq-dns" Feb 14 19:05:57 crc kubenswrapper[4897]: E0214 19:05:57.969501 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="init" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969506 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="init" Feb 14 19:05:57 crc kubenswrapper[4897]: E0214 19:05:57.969516 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="registry-server" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969524 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="registry-server" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969750 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5911804f-29c7-44a8-8688-0bc0fe0a46ac" containerName="dnsmasq-dns" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.969764 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9113bf61-0b89-4343-b4e4-93cc2f704cf9" containerName="registry-server" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.970953 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:57 crc kubenswrapper[4897]: I0214 19:05:57.983984 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bb56bbbfb-v5pf9"] Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.072700 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrss\" (UniqueName: \"kubernetes.io/projected/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-kube-api-access-8zrss\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.072975 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-combined-ca-bundle\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.073099 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-scripts\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.073119 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-logs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.073181 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-internal-tls-certs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.073224 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-config-data\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.073278 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-public-tls-certs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175142 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-internal-tls-certs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175247 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-config-data\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175292 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-public-tls-certs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175334 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zrss\" (UniqueName: \"kubernetes.io/projected/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-kube-api-access-8zrss\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175412 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-combined-ca-bundle\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175500 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-scripts\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.175521 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-logs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.176099 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-logs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: 
\"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.182983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-public-tls-certs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.190047 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-internal-tls-certs\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.190082 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-config-data\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.195569 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-combined-ca-bundle\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.193212 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-scripts\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 
19:05:58.202341 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zrss\" (UniqueName: \"kubernetes.io/projected/df22fdf1-e5d3-4d8b-9385-4f3abeda71ee-kube-api-access-8zrss\") pod \"placement-bb56bbbfb-v5pf9\" (UID: \"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee\") " pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.296198 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:58 crc kubenswrapper[4897]: I0214 19:05:58.779549 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bb56bbbfb-v5pf9"] Feb 14 19:05:58 crc kubenswrapper[4897]: W0214 19:05:58.782864 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf22fdf1_e5d3_4d8b_9385_4f3abeda71ee.slice/crio-995e11815f03a52f7278ea17057e524bd143e2617ff61e7b165f9a87bbd4e9c0 WatchSource:0}: Error finding container 995e11815f03a52f7278ea17057e524bd143e2617ff61e7b165f9a87bbd4e9c0: Status 404 returned error can't find the container with id 995e11815f03a52f7278ea17057e524bd143e2617ff61e7b165f9a87bbd4e9c0 Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.454538 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-798cbbdc78-n5tht" Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.684491 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bb56bbbfb-v5pf9" event={"ID":"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee","Type":"ContainerStarted","Data":"13d4743988d21ee28420d326d9f95f3a39f56600b5115f4eb159a58289283ebf"} Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.684533 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bb56bbbfb-v5pf9" 
event={"ID":"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee","Type":"ContainerStarted","Data":"2f3df70b913b4de94b3ecbc24e83421af7c8c9a5f963c7edc22c95a021232f46"} Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.684542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bb56bbbfb-v5pf9" event={"ID":"df22fdf1-e5d3-4d8b-9385-4f3abeda71ee","Type":"ContainerStarted","Data":"995e11815f03a52f7278ea17057e524bd143e2617ff61e7b165f9a87bbd4e9c0"} Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.685867 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.685893 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bb56bbbfb-v5pf9" Feb 14 19:05:59 crc kubenswrapper[4897]: I0214 19:05:59.736096 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bb56bbbfb-v5pf9" podStartSLOduration=2.736078099 podStartE2EDuration="2.736078099s" podCreationTimestamp="2026-02-14 19:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:05:59.707546073 +0000 UTC m=+1412.683954566" watchObservedRunningTime="2026-02-14 19:05:59.736078099 +0000 UTC m=+1412.712486592" Feb 14 19:06:01 crc kubenswrapper[4897]: I0214 19:06:01.726655 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:06:01 crc kubenswrapper[4897]: I0214 19:06:01.728293 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:06:01 crc kubenswrapper[4897]: I0214 19:06:01.739312 4897 generic.go:334] "Generic (PLEG): container finished" podID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerID="b03e5ffab161800ddd9e3a9db87fb8e513602d6cc7ba4acb210aa13c4b454bd9" exitCode=0 Feb 14 19:06:01 crc kubenswrapper[4897]: I0214 19:06:01.740212 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b40e3c50-9e19-4e83-af97-75ddf8aa8d88","Type":"ContainerDied","Data":"b03e5ffab161800ddd9e3a9db87fb8e513602d6cc7ba4acb210aa13c4b454bd9"} Feb 14 19:06:01 crc kubenswrapper[4897]: I0214 19:06:01.930257 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.083694 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-etc-machine-id\") pod \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.083823 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b40e3c50-9e19-4e83-af97-75ddf8aa8d88" (UID: "b40e3c50-9e19-4e83-af97-75ddf8aa8d88"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.083847 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-scripts\") pod \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.083927 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj4vg\" (UniqueName: \"kubernetes.io/projected/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-kube-api-access-hj4vg\") pod \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.084104 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-combined-ca-bundle\") pod \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.084146 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data\") pod \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.084165 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data-custom\") pod \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\" (UID: \"b40e3c50-9e19-4e83-af97-75ddf8aa8d88\") " Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.084618 4897 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.092513 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b40e3c50-9e19-4e83-af97-75ddf8aa8d88" (UID: "b40e3c50-9e19-4e83-af97-75ddf8aa8d88"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.093127 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-scripts" (OuterVolumeSpecName: "scripts") pod "b40e3c50-9e19-4e83-af97-75ddf8aa8d88" (UID: "b40e3c50-9e19-4e83-af97-75ddf8aa8d88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.101166 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-kube-api-access-hj4vg" (OuterVolumeSpecName: "kube-api-access-hj4vg") pod "b40e3c50-9e19-4e83-af97-75ddf8aa8d88" (UID: "b40e3c50-9e19-4e83-af97-75ddf8aa8d88"). InnerVolumeSpecName "kube-api-access-hj4vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.187615 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.187647 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj4vg\" (UniqueName: \"kubernetes.io/projected/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-kube-api-access-hj4vg\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.187656 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.214252 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b40e3c50-9e19-4e83-af97-75ddf8aa8d88" (UID: "b40e3c50-9e19-4e83-af97-75ddf8aa8d88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.290583 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.304993 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data" (OuterVolumeSpecName: "config-data") pod "b40e3c50-9e19-4e83-af97-75ddf8aa8d88" (UID: "b40e3c50-9e19-4e83-af97-75ddf8aa8d88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.393010 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40e3c50-9e19-4e83-af97-75ddf8aa8d88-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.755647 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b40e3c50-9e19-4e83-af97-75ddf8aa8d88","Type":"ContainerDied","Data":"9557d514311944daa587aca1a5b0bf39f409071ceaac780d001cca5f58c47f16"} Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.756459 4897 scope.go:117] "RemoveContainer" containerID="d474a8ee93a1b643b6e71d96bb3fac578ae40675bde2d9f660ca216e3f6f39de" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.755720 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.786962 4897 scope.go:117] "RemoveContainer" containerID="b03e5ffab161800ddd9e3a9db87fb8e513602d6cc7ba4acb210aa13c4b454bd9" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.830621 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.843669 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.854675 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:06:02 crc kubenswrapper[4897]: E0214 19:06:02.855571 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="probe" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.855693 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="probe" Feb 14 
19:06:02 crc kubenswrapper[4897]: E0214 19:06:02.855811 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="cinder-scheduler" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.855895 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="cinder-scheduler" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.856324 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="cinder-scheduler" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.856421 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" containerName="probe" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.857882 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.860182 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 19:06:02 crc kubenswrapper[4897]: I0214 19:06:02.867463 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.017593 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-config-data\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.017990 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.018125 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ngb\" (UniqueName: \"kubernetes.io/projected/e95d0e1a-6046-4ec7-8422-0858aca3bca9-kube-api-access-f6ngb\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.018149 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e95d0e1a-6046-4ec7-8422-0858aca3bca9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.018344 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-scripts\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.018584 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121058 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " 
pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121173 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6ngb\" (UniqueName: \"kubernetes.io/projected/e95d0e1a-6046-4ec7-8422-0858aca3bca9-kube-api-access-f6ngb\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121200 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e95d0e1a-6046-4ec7-8422-0858aca3bca9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121241 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-scripts\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121297 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121325 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-config-data\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.121565 4897 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e95d0e1a-6046-4ec7-8422-0858aca3bca9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.128315 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.128806 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.128936 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-config-data\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.136116 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e95d0e1a-6046-4ec7-8422-0858aca3bca9-scripts\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.139307 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.140744 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.142064 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6ngb\" (UniqueName: \"kubernetes.io/projected/e95d0e1a-6046-4ec7-8422-0858aca3bca9-kube-api-access-f6ngb\") pod \"cinder-scheduler-0\" (UID: \"e95d0e1a-6046-4ec7-8422-0858aca3bca9\") " pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.143671 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-t2bqd" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.145851 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.146018 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.153061 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.198699 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.222870 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.222944 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks2v\" (UniqueName: \"kubernetes.io/projected/cae53d40-a11d-48a6-933b-7e3710bd96d7-kube-api-access-dks2v\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.223063 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.237751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.339546 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 
14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.339592 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.339669 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.339713 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks2v\" (UniqueName: \"kubernetes.io/projected/cae53d40-a11d-48a6-933b-7e3710bd96d7-kube-api-access-dks2v\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.340974 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.343484 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.344794 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.354919 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks2v\" (UniqueName: \"kubernetes.io/projected/cae53d40-a11d-48a6-933b-7e3710bd96d7-kube-api-access-dks2v\") pod \"openstackclient\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") " pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.430618 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.453635 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.482730 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.485946 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.488596 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.496171 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 19:06:03 crc kubenswrapper[4897]: E0214 19:06:03.616249 4897 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 14 19:06:03 crc kubenswrapper[4897]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_cae53d40-a11d-48a6-933b-7e3710bd96d7_0(9a3c47c11584d2e900e5a53038108993672678d4059f63ee7cd50a76b5db573c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9a3c47c11584d2e900e5a53038108993672678d4059f63ee7cd50a76b5db573c" Netns:"/var/run/netns/1f3f7b30-33e9-4f2d-b42b-c7184dce4e7c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9a3c47c11584d2e900e5a53038108993672678d4059f63ee7cd50a76b5db573c;K8S_POD_UID=cae53d40-a11d-48a6-933b-7e3710bd96d7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/cae53d40-a11d-48a6-933b-7e3710bd96d7]: expected pod UID "cae53d40-a11d-48a6-933b-7e3710bd96d7" but got "58bd1c73-7683-4665-92cc-2dbb8a1658a3" from Kube API Feb 14 19:06:03 crc kubenswrapper[4897]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 14 19:06:03 crc kubenswrapper[4897]: > Feb 14 19:06:03 crc kubenswrapper[4897]: E0214 19:06:03.616310 4897 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err=< Feb 14 19:06:03 crc kubenswrapper[4897]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_cae53d40-a11d-48a6-933b-7e3710bd96d7_0(9a3c47c11584d2e900e5a53038108993672678d4059f63ee7cd50a76b5db573c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9a3c47c11584d2e900e5a53038108993672678d4059f63ee7cd50a76b5db573c" Netns:"/var/run/netns/1f3f7b30-33e9-4f2d-b42b-c7184dce4e7c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9a3c47c11584d2e900e5a53038108993672678d4059f63ee7cd50a76b5db573c;K8S_POD_UID=cae53d40-a11d-48a6-933b-7e3710bd96d7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/cae53d40-a11d-48a6-933b-7e3710bd96d7]: expected pod UID "cae53d40-a11d-48a6-933b-7e3710bd96d7" but got "58bd1c73-7683-4665-92cc-2dbb8a1658a3" from Kube API Feb 14 19:06:03 crc kubenswrapper[4897]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 14 19:06:03 crc kubenswrapper[4897]: > pod="openstack/openstackclient" Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.652868 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58bd1c73-7683-4665-92cc-2dbb8a1658a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " 
pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.652919 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bd1c73-7683-4665-92cc-2dbb8a1658a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.652970 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58bd1c73-7683-4665-92cc-2dbb8a1658a3-openstack-config\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.653086 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg56d\" (UniqueName: \"kubernetes.io/projected/58bd1c73-7683-4665-92cc-2dbb8a1658a3-kube-api-access-wg56d\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.759351 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bd1c73-7683-4665-92cc-2dbb8a1658a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.759504 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58bd1c73-7683-4665-92cc-2dbb8a1658a3-openstack-config\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.759787 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg56d\" (UniqueName: \"kubernetes.io/projected/58bd1c73-7683-4665-92cc-2dbb8a1658a3-kube-api-access-wg56d\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.759923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58bd1c73-7683-4665-92cc-2dbb8a1658a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.764286 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58bd1c73-7683-4665-92cc-2dbb8a1658a3-openstack-config\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.765856 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bd1c73-7683-4665-92cc-2dbb8a1658a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.766380 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58bd1c73-7683-4665-92cc-2dbb8a1658a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.767105 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.770353 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cae53d40-a11d-48a6-933b-7e3710bd96d7" podUID="58bd1c73-7683-4665-92cc-2dbb8a1658a3"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.775817 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.776085 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg56d\" (UniqueName: \"kubernetes.io/projected/58bd1c73-7683-4665-92cc-2dbb8a1658a3-kube-api-access-wg56d\") pod \"openstackclient\" (UID: \"58bd1c73-7683-4665-92cc-2dbb8a1658a3\") " pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.807851 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40e3c50-9e19-4e83-af97-75ddf8aa8d88" path="/var/lib/kubelet/pods/b40e3c50-9e19-4e83-af97-75ddf8aa8d88/volumes"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.810015 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.946552 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.964474 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config-secret\") pod \"cae53d40-a11d-48a6-933b-7e3710bd96d7\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") "
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.964567 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-combined-ca-bundle\") pod \"cae53d40-a11d-48a6-933b-7e3710bd96d7\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") "
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.964598 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks2v\" (UniqueName: \"kubernetes.io/projected/cae53d40-a11d-48a6-933b-7e3710bd96d7-kube-api-access-dks2v\") pod \"cae53d40-a11d-48a6-933b-7e3710bd96d7\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") "
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.964744 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config\") pod \"cae53d40-a11d-48a6-933b-7e3710bd96d7\" (UID: \"cae53d40-a11d-48a6-933b-7e3710bd96d7\") "
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.965765 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cae53d40-a11d-48a6-933b-7e3710bd96d7" (UID: "cae53d40-a11d-48a6-933b-7e3710bd96d7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.989316 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae53d40-a11d-48a6-933b-7e3710bd96d7" (UID: "cae53d40-a11d-48a6-933b-7e3710bd96d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:03 crc kubenswrapper[4897]: I0214 19:06:03.990565 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae53d40-a11d-48a6-933b-7e3710bd96d7-kube-api-access-dks2v" (OuterVolumeSpecName: "kube-api-access-dks2v") pod "cae53d40-a11d-48a6-933b-7e3710bd96d7" (UID: "cae53d40-a11d-48a6-933b-7e3710bd96d7"). InnerVolumeSpecName "kube-api-access-dks2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.006911 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cae53d40-a11d-48a6-933b-7e3710bd96d7" (UID: "cae53d40-a11d-48a6-933b-7e3710bd96d7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.082301 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.082337 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae53d40-a11d-48a6-933b-7e3710bd96d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.082346 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks2v\" (UniqueName: \"kubernetes.io/projected/cae53d40-a11d-48a6-933b-7e3710bd96d7-kube-api-access-dks2v\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.082356 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cae53d40-a11d-48a6-933b-7e3710bd96d7-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.377494 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.700125 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.782374 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"58bd1c73-7683-4665-92cc-2dbb8a1658a3","Type":"ContainerStarted","Data":"37ec910acfdf4d9ff528dab1d7ca8bbe6d1fb318a6ef82b14b5a62f84358c905"}
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.787916 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.788872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e95d0e1a-6046-4ec7-8422-0858aca3bca9","Type":"ContainerStarted","Data":"cadf3a9715958640ff0d8bd4da07bbf6f34cde6693a386925b55064de2993c3b"}
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.788924 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e95d0e1a-6046-4ec7-8422-0858aca3bca9","Type":"ContainerStarted","Data":"8b5a1335d688ae64a4dffd2603e981fe66a30e08752ace2bc274f4a10166bc7c"}
Feb 14 19:06:04 crc kubenswrapper[4897]: I0214 19:06:04.807905 4897 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cae53d40-a11d-48a6-933b-7e3710bd96d7" podUID="58bd1c73-7683-4665-92cc-2dbb8a1658a3"
Feb 14 19:06:05 crc kubenswrapper[4897]: I0214 19:06:05.808179 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae53d40-a11d-48a6-933b-7e3710bd96d7" path="/var/lib/kubelet/pods/cae53d40-a11d-48a6-933b-7e3710bd96d7/volumes"
Feb 14 19:06:05 crc kubenswrapper[4897]: I0214 19:06:05.809134 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e95d0e1a-6046-4ec7-8422-0858aca3bca9","Type":"ContainerStarted","Data":"fd984264ebce60010b31371ba6b1dcf08ac2c6f93034c4848f673c1703099693"}
Feb 14 19:06:05 crc kubenswrapper[4897]: I0214 19:06:05.836143 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.836121232 podStartE2EDuration="3.836121232s" podCreationTimestamp="2026-02-14 19:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:05.825402034 +0000 UTC m=+1418.801810537" watchObservedRunningTime="2026-02-14 19:06:05.836121232 +0000 UTC m=+1418.812529715"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.726575 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.743782 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.829751 4897 generic.go:334] "Generic (PLEG): container finished" podID="b2831142-237b-4232-8433-1a71cecdc1aa" containerID="42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4" exitCode=137
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.829823 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" event={"ID":"b2831142-237b-4232-8433-1a71cecdc1aa","Type":"ContainerDied","Data":"42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4"}
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.829835 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.829850 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-786dc678dd-l4rb5" event={"ID":"b2831142-237b-4232-8433-1a71cecdc1aa","Type":"ContainerDied","Data":"4fe9d51f8b225a71d7a25149bff878076501cd799bdfd04df4dc4ce50e4c4d7c"}
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.829912 4897 scope.go:117] "RemoveContainer" containerID="42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.834007 4897 generic.go:334] "Generic (PLEG): container finished" podID="d6708e0a-c394-435d-b408-84716a21508f" containerID="a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040" exitCode=137
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.834869 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8bddbd865-mxphm"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.834879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8bddbd865-mxphm" event={"ID":"d6708e0a-c394-435d-b408-84716a21508f","Type":"ContainerDied","Data":"a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040"}
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.834930 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8bddbd865-mxphm" event={"ID":"d6708e0a-c394-435d-b408-84716a21508f","Type":"ContainerDied","Data":"6b844b3d6d2ceb4319a55dd0d435f8bcbc4ca9892469061eae1fc3d9974d4b7b"}
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.847061 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl7zp\" (UniqueName: \"kubernetes.io/projected/b2831142-237b-4232-8433-1a71cecdc1aa-kube-api-access-fl7zp\") pod \"b2831142-237b-4232-8433-1a71cecdc1aa\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.847264 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-combined-ca-bundle\") pod \"d6708e0a-c394-435d-b408-84716a21508f\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.848711 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data\") pod \"b2831142-237b-4232-8433-1a71cecdc1aa\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.848781 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmz9x\" (UniqueName: \"kubernetes.io/projected/d6708e0a-c394-435d-b408-84716a21508f-kube-api-access-xmz9x\") pod \"d6708e0a-c394-435d-b408-84716a21508f\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.848983 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data-custom\") pod \"b2831142-237b-4232-8433-1a71cecdc1aa\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.849014 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data\") pod \"d6708e0a-c394-435d-b408-84716a21508f\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.849049 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-combined-ca-bundle\") pod \"b2831142-237b-4232-8433-1a71cecdc1aa\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.849127 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data-custom\") pod \"d6708e0a-c394-435d-b408-84716a21508f\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.849162 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2831142-237b-4232-8433-1a71cecdc1aa-logs\") pod \"b2831142-237b-4232-8433-1a71cecdc1aa\" (UID: \"b2831142-237b-4232-8433-1a71cecdc1aa\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.849178 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6708e0a-c394-435d-b408-84716a21508f-logs\") pod \"d6708e0a-c394-435d-b408-84716a21508f\" (UID: \"d6708e0a-c394-435d-b408-84716a21508f\") "
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.850678 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6708e0a-c394-435d-b408-84716a21508f-logs" (OuterVolumeSpecName: "logs") pod "d6708e0a-c394-435d-b408-84716a21508f" (UID: "d6708e0a-c394-435d-b408-84716a21508f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.851430 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2831142-237b-4232-8433-1a71cecdc1aa-logs" (OuterVolumeSpecName: "logs") pod "b2831142-237b-4232-8433-1a71cecdc1aa" (UID: "b2831142-237b-4232-8433-1a71cecdc1aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.854098 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d6708e0a-c394-435d-b408-84716a21508f" (UID: "d6708e0a-c394-435d-b408-84716a21508f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.855481 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6708e0a-c394-435d-b408-84716a21508f-kube-api-access-xmz9x" (OuterVolumeSpecName: "kube-api-access-xmz9x") pod "d6708e0a-c394-435d-b408-84716a21508f" (UID: "d6708e0a-c394-435d-b408-84716a21508f"). InnerVolumeSpecName "kube-api-access-xmz9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.856565 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2831142-237b-4232-8433-1a71cecdc1aa-kube-api-access-fl7zp" (OuterVolumeSpecName: "kube-api-access-fl7zp") pod "b2831142-237b-4232-8433-1a71cecdc1aa" (UID: "b2831142-237b-4232-8433-1a71cecdc1aa"). InnerVolumeSpecName "kube-api-access-fl7zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.858598 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b2831142-237b-4232-8433-1a71cecdc1aa" (UID: "b2831142-237b-4232-8433-1a71cecdc1aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.875129 4897 scope.go:117] "RemoveContainer" containerID="d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.886662 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2831142-237b-4232-8433-1a71cecdc1aa" (UID: "b2831142-237b-4232-8433-1a71cecdc1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.895848 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6708e0a-c394-435d-b408-84716a21508f" (UID: "d6708e0a-c394-435d-b408-84716a21508f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.926858 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data" (OuterVolumeSpecName: "config-data") pod "b2831142-237b-4232-8433-1a71cecdc1aa" (UID: "b2831142-237b-4232-8433-1a71cecdc1aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.930146 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data" (OuterVolumeSpecName: "config-data") pod "d6708e0a-c394-435d-b408-84716a21508f" (UID: "d6708e0a-c394-435d-b408-84716a21508f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.936327 4897 scope.go:117] "RemoveContainer" containerID="42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4"
Feb 14 19:06:06 crc kubenswrapper[4897]: E0214 19:06:06.936938 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4\": container with ID starting with 42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4 not found: ID does not exist" containerID="42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.936992 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4"} err="failed to get container status \"42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4\": rpc error: code = NotFound desc = could not find container \"42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4\": container with ID starting with 42011cc9478d5b964b4de7de2ddb08f4eb06dd4e662546321d90a436eeb366c4 not found: ID does not exist"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.937019 4897 scope.go:117] "RemoveContainer" containerID="d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052"
Feb 14 19:06:06 crc kubenswrapper[4897]: E0214 19:06:06.937426 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052\": container with ID starting with d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052 not found: ID does not exist" containerID="d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.937460 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052"} err="failed to get container status \"d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052\": rpc error: code = NotFound desc = could not find container \"d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052\": container with ID starting with d66d9475087a65c875d4f2ba4d212cab00fdfed9b71e525f6378b5a934609052 not found: ID does not exist"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.937474 4897 scope.go:117] "RemoveContainer" containerID="a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040"
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952513 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952541 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952551 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952560 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952568 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2831142-237b-4232-8433-1a71cecdc1aa-logs\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952577 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6708e0a-c394-435d-b408-84716a21508f-logs\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952585 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl7zp\" (UniqueName: \"kubernetes.io/projected/b2831142-237b-4232-8433-1a71cecdc1aa-kube-api-access-fl7zp\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952593 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6708e0a-c394-435d-b408-84716a21508f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952601 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2831142-237b-4232-8433-1a71cecdc1aa-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.952610 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmz9x\" (UniqueName: \"kubernetes.io/projected/d6708e0a-c394-435d-b408-84716a21508f-kube-api-access-xmz9x\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:06 crc kubenswrapper[4897]: I0214 19:06:06.980182 4897 scope.go:117] "RemoveContainer" containerID="3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.008287 4897 scope.go:117] "RemoveContainer" containerID="a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040"
Feb 14 19:06:07 crc kubenswrapper[4897]: E0214 19:06:07.008941 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040\": container with ID starting with a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040 not found: ID does not exist" containerID="a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.008997 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040"} err="failed to get container status \"a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040\": rpc error: code = NotFound desc = could not find container \"a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040\": container with ID starting with a425845eec9fec19817515b2fa74cb33b7510f33c4feca0989f5697d2a099040 not found: ID does not exist"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.009046 4897 scope.go:117] "RemoveContainer" containerID="3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831"
Feb 14 19:06:07 crc kubenswrapper[4897]: E0214 19:06:07.009556 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831\": container with ID starting with 3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831 not found: ID does not exist" containerID="3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.009612 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831"} err="failed to get container status \"3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831\": rpc error: code = NotFound desc = could not find container \"3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831\": container with ID starting with 3f8294d94fc13e1d78970f43614442c964101a7b4ea7435e69b8ff0f08361831 not found: ID does not exist"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.189242 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8bddbd865-mxphm"]
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.211725 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-8bddbd865-mxphm"]
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.224635 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-786dc678dd-l4rb5"]
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.238708 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-786dc678dd-l4rb5"]
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.498205 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-697fc44bdc-wm8v2"]
Feb 14 19:06:07 crc kubenswrapper[4897]: E0214 19:06:07.499058 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499078 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener"
Feb 14 19:06:07 crc kubenswrapper[4897]: E0214 19:06:07.499130 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499141 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker"
Feb 14 19:06:07 crc kubenswrapper[4897]: E0214 19:06:07.499178 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener-log"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499186 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener-log"
Feb 14 19:06:07 crc kubenswrapper[4897]: E0214 19:06:07.499234 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker-log"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499242 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker-log"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499504 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499535 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker-log"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499553 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6708e0a-c394-435d-b408-84716a21508f" containerName="barbican-worker"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.499608 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" containerName="barbican-keystone-listener-log"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.501361 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.505677 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.505800 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.505889 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.526496 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-697fc44bdc-wm8v2"]
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.567811 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a7e768f3-e3b8-4197-aaeb-8b1013320b47-etc-swift\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.567862 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-combined-ca-bundle\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.567919 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6vp\" (UniqueName: \"kubernetes.io/projected/a7e768f3-e3b8-4197-aaeb-8b1013320b47-kube-api-access-jm6vp\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.567949 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-public-tls-certs\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.568011 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7e768f3-e3b8-4197-aaeb-8b1013320b47-run-httpd\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.568061 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-internal-tls-certs\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.568080 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-config-data\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.568107 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7e768f3-e3b8-4197-aaeb-8b1013320b47-log-httpd\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.670658 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm6vp\" (UniqueName: \"kubernetes.io/projected/a7e768f3-e3b8-4197-aaeb-8b1013320b47-kube-api-access-jm6vp\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.670733 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-public-tls-certs\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.671543 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7e768f3-e3b8-4197-aaeb-8b1013320b47-run-httpd\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.671588 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-internal-tls-certs\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.671614 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-config-data\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2"
Feb 14 19:06:07 crc kubenswrapper[4897]: I0214
19:06:07.671645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7e768f3-e3b8-4197-aaeb-8b1013320b47-log-httpd\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.671719 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a7e768f3-e3b8-4197-aaeb-8b1013320b47-etc-swift\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.671770 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-combined-ca-bundle\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.672075 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7e768f3-e3b8-4197-aaeb-8b1013320b47-run-httpd\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.672304 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7e768f3-e3b8-4197-aaeb-8b1013320b47-log-httpd\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.675719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-config-data\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.675765 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-internal-tls-certs\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.676436 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-combined-ca-bundle\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.678012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7e768f3-e3b8-4197-aaeb-8b1013320b47-public-tls-certs\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.683160 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a7e768f3-e3b8-4197-aaeb-8b1013320b47-etc-swift\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.687707 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm6vp\" (UniqueName: 
\"kubernetes.io/projected/a7e768f3-e3b8-4197-aaeb-8b1013320b47-kube-api-access-jm6vp\") pod \"swift-proxy-697fc44bdc-wm8v2\" (UID: \"a7e768f3-e3b8-4197-aaeb-8b1013320b47\") " pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.809430 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2831142-237b-4232-8433-1a71cecdc1aa" path="/var/lib/kubelet/pods/b2831142-237b-4232-8433-1a71cecdc1aa/volumes" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.810263 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6708e0a-c394-435d-b408-84716a21508f" path="/var/lib/kubelet/pods/d6708e0a-c394-435d-b408-84716a21508f/volumes" Feb 14 19:06:07 crc kubenswrapper[4897]: I0214 19:06:07.824277 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.199465 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.285910 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.286191 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-central-agent" containerID="cri-o://760a6f2275c9ee6c8d45053f8eac13713f8914b73393fe564d116b644dd6e7c5" gracePeriod=30 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.286320 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="proxy-httpd" containerID="cri-o://1ef467dc1eac14c9f1cfb39daf5dfa4b241eb9208fe08d70df48f51546b37db3" gracePeriod=30 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.286360 4897 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="sg-core" containerID="cri-o://ddc76b40d2e013af34001f733a82ec7a31602e292c41f23b0a0dcc2397b9bdb8" gracePeriod=30 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.286391 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-notification-agent" containerID="cri-o://1d952058c55e433be40d9c8cfa8f59ce4da5b40845d30717f31a857b05b6797c" gracePeriod=30 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.312511 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.416876 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-697fc44bdc-wm8v2"] Feb 14 19:06:08 crc kubenswrapper[4897]: W0214 19:06:08.422330 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7e768f3_e3b8_4197_aaeb_8b1013320b47.slice/crio-1fdad9c12d024bb821181bf13cb9c85c0960ddc509006e489860e2e99308da5b WatchSource:0}: Error finding container 1fdad9c12d024bb821181bf13cb9c85c0960ddc509006e489860e2e99308da5b: Status 404 returned error can't find the container with id 1fdad9c12d024bb821181bf13cb9c85c0960ddc509006e489860e2e99308da5b Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.863494 4897 generic.go:334] "Generic (PLEG): container finished" podID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerID="1ef467dc1eac14c9f1cfb39daf5dfa4b241eb9208fe08d70df48f51546b37db3" exitCode=0 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.863811 4897 generic.go:334] "Generic (PLEG): container 
finished" podID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerID="ddc76b40d2e013af34001f733a82ec7a31602e292c41f23b0a0dcc2397b9bdb8" exitCode=2 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.863822 4897 generic.go:334] "Generic (PLEG): container finished" podID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerID="760a6f2275c9ee6c8d45053f8eac13713f8914b73393fe564d116b644dd6e7c5" exitCode=0 Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.863857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerDied","Data":"1ef467dc1eac14c9f1cfb39daf5dfa4b241eb9208fe08d70df48f51546b37db3"} Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.863882 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerDied","Data":"ddc76b40d2e013af34001f733a82ec7a31602e292c41f23b0a0dcc2397b9bdb8"} Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.863892 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerDied","Data":"760a6f2275c9ee6c8d45053f8eac13713f8914b73393fe564d116b644dd6e7c5"} Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.865753 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-697fc44bdc-wm8v2" event={"ID":"a7e768f3-e3b8-4197-aaeb-8b1013320b47","Type":"ContainerStarted","Data":"07a8cbed1840b320acba23131de1ff005c2c4eae969ebeb10144d05634c5a1e4"} Feb 14 19:06:08 crc kubenswrapper[4897]: I0214 19:06:08.865778 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-697fc44bdc-wm8v2" event={"ID":"a7e768f3-e3b8-4197-aaeb-8b1013320b47","Type":"ContainerStarted","Data":"1fdad9c12d024bb821181bf13cb9c85c0960ddc509006e489860e2e99308da5b"} Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.613772 4897 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5d7f548864-bdfgg"] Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.615628 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.623511 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-vv8rd" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.623800 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.624072 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.629728 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d7f548864-bdfgg"] Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.720195 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79b9v\" (UniqueName: \"kubernetes.io/projected/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-kube-api-access-79b9v\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.720277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.720399 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-combined-ca-bundle\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.720474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data-custom\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.778092 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-kt766"] Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.779965 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825161 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-combined-ca-bundle\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825482 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqttv\" (UniqueName: \"kubernetes.io/projected/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-kube-api-access-jqttv\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825548 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825571 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data-custom\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825617 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79b9v\" (UniqueName: \"kubernetes.io/projected/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-kube-api-access-79b9v\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825676 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825701 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-config\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825717 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.825753 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.841943 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data-custom\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.848139 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.865141 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-kt766"] Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 
19:06:09.868713 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-combined-ca-bundle\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.872377 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79b9v\" (UniqueName: \"kubernetes.io/projected/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-kube-api-access-79b9v\") pod \"heat-engine-5d7f548864-bdfgg\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.929371 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.929691 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.929832 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.929910 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-config\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.930075 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.930350 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqttv\" (UniqueName: \"kubernetes.io/projected/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-kube-api-access-jqttv\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.954159 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.953706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-config\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.956665 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.957362 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.958239 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-697fc44bdc-wm8v2" event={"ID":"a7e768f3-e3b8-4197-aaeb-8b1013320b47","Type":"ContainerStarted","Data":"7e925db4fc049570c03d03ddb9e2cb9c21be17df86e60dff0dcc00a937987ab9"} Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.958371 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.958471 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.962540 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.970713 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:09 crc kubenswrapper[4897]: I0214 19:06:09.992717 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqttv\" (UniqueName: \"kubernetes.io/projected/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-kube-api-access-jqttv\") pod \"dnsmasq-dns-7756b9d78c-kt766\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") " pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.013139 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-796669b846-cd6hr"] Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.014614 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.020724 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.035677 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.035950 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data-custom\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.036040 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7p6n\" (UniqueName: \"kubernetes.io/projected/0e4b6b13-37e3-4061-9e06-5969de8b94f1-kube-api-access-g7p6n\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.036140 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-combined-ca-bundle\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.099069 4897 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-cfnapi-796669b846-cd6hr"] Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.136598 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.138048 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.138198 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data-custom\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.138272 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7p6n\" (UniqueName: \"kubernetes.io/projected/0e4b6b13-37e3-4061-9e06-5969de8b94f1-kube-api-access-g7p6n\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.138358 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-combined-ca-bundle\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.149788 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-combined-ca-bundle\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.166860 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data-custom\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.174487 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-67c9665685-zvrsn"] Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.174692 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7p6n\" (UniqueName: \"kubernetes.io/projected/0e4b6b13-37e3-4061-9e06-5969de8b94f1-kube-api-access-g7p6n\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.176168 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.178088 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data\") pod \"heat-cfnapi-796669b846-cd6hr\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.181409 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.206687 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67c9665685-zvrsn"] Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.242581 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data-custom\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.242625 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.242654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-combined-ca-bundle\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc 
kubenswrapper[4897]: I0214 19:06:10.242686 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqjz\" (UniqueName: \"kubernetes.io/projected/5c324e69-4bb9-40a6-a883-73a42e9ef646-kube-api-access-4cqjz\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.244209 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-697fc44bdc-wm8v2" podStartSLOduration=3.244184586 podStartE2EDuration="3.244184586s" podCreationTimestamp="2026-02-14 19:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:10.051773388 +0000 UTC m=+1423.028181881" watchObservedRunningTime="2026-02-14 19:06:10.244184586 +0000 UTC m=+1423.220593069" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.347326 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data-custom\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.347373 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.347457 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-combined-ca-bundle\") pod 
\"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.347512 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqjz\" (UniqueName: \"kubernetes.io/projected/5c324e69-4bb9-40a6-a883-73a42e9ef646-kube-api-access-4cqjz\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.353600 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data-custom\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.354339 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-combined-ca-bundle\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.356736 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.362450 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqjz\" (UniqueName: \"kubernetes.io/projected/5c324e69-4bb9-40a6-a883-73a42e9ef646-kube-api-access-4cqjz\") pod \"heat-api-67c9665685-zvrsn\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " 
pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.364132 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:10 crc kubenswrapper[4897]: I0214 19:06:10.564922 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:12 crc kubenswrapper[4897]: I0214 19:06:12.225264 4897 generic.go:334] "Generic (PLEG): container finished" podID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerID="1d952058c55e433be40d9c8cfa8f59ce4da5b40845d30717f31a857b05b6797c" exitCode=0 Feb 14 19:06:12 crc kubenswrapper[4897]: I0214 19:06:12.225353 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerDied","Data":"1d952058c55e433be40d9c8cfa8f59ce4da5b40845d30717f31a857b05b6797c"} Feb 14 19:06:13 crc kubenswrapper[4897]: I0214 19:06:13.424431 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.776620 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-dc4df654d-9w4f2"] Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.778624 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.788108 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-94b476d6c-nbxhf"] Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.790290 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.837715 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-94b476d6c-nbxhf"] Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.849140 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dc4df654d-9w4f2"] Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.856706 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-86f48db4c-p7v4g"] Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.858404 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.878992 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-combined-ca-bundle\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.879059 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data-custom\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.879159 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 
19:06:15.879207 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lhj\" (UniqueName: \"kubernetes.io/projected/aec03a9b-3137-443f-b07f-eade8ffa27f5-kube-api-access-d6lhj\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.879231 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9b2\" (UniqueName: \"kubernetes.io/projected/c0485238-dabe-46e0-87b1-239d64814ef8-kube-api-access-jj9b2\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.879291 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data-custom\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.879323 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-combined-ca-bundle\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.879349 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 
14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.890260 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86f48db4c-p7v4g"] Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981022 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981176 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6lhj\" (UniqueName: \"kubernetes.io/projected/aec03a9b-3137-443f-b07f-eade8ffa27f5-kube-api-access-d6lhj\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981207 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj9b2\" (UniqueName: \"kubernetes.io/projected/c0485238-dabe-46e0-87b1-239d64814ef8-kube-api-access-jj9b2\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981256 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slhc\" (UniqueName: \"kubernetes.io/projected/3ff2fa58-497f-4e1c-8447-a25032ebac67-kube-api-access-6slhc\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981282 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data-custom\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981305 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-combined-ca-bundle\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981323 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981423 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981445 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-combined-ca-bundle\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981466 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-combined-ca-bundle\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981490 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data-custom\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.981507 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data-custom\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.991272 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data-custom\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.991956 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:15 crc kubenswrapper[4897]: I0214 19:06:15.999743 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data-custom\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.001604 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj9b2\" (UniqueName: \"kubernetes.io/projected/c0485238-dabe-46e0-87b1-239d64814ef8-kube-api-access-jj9b2\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.004484 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6lhj\" (UniqueName: \"kubernetes.io/projected/aec03a9b-3137-443f-b07f-eade8ffa27f5-kube-api-access-d6lhj\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.007072 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-combined-ca-bundle\") pod \"heat-api-94b476d6c-nbxhf\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") " pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.011905 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data\") pod \"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.018833 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-combined-ca-bundle\") pod 
\"heat-cfnapi-86f48db4c-p7v4g\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") " pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.022692 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.206:3000/\": dial tcp 10.217.0.206:3000: connect: connection refused" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.083921 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.083969 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-combined-ca-bundle\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.083998 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data-custom\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.084106 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slhc\" (UniqueName: \"kubernetes.io/projected/3ff2fa58-497f-4e1c-8447-a25032ebac67-kube-api-access-6slhc\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " 
pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.089983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-combined-ca-bundle\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.091098 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data-custom\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.091902 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.105890 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slhc\" (UniqueName: \"kubernetes.io/projected/3ff2fa58-497f-4e1c-8447-a25032ebac67-kube-api-access-6slhc\") pod \"heat-engine-dc4df654d-9w4f2\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") " pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.146263 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.181063 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.403044 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.677475 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-567589579f-jbtqc" Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.742271 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f78bcb6c6-95jr5"] Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.742601 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f78bcb6c6-95jr5" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-httpd" containerID="cri-o://006cb9f87c8e9b82c013f350d99ca6813d52dd9db09179684551e19ec51b572f" gracePeriod=30 Feb 14 19:06:16 crc kubenswrapper[4897]: I0214 19:06:16.743855 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f78bcb6c6-95jr5" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-api" containerID="cri-o://17a52e8a3fe8f070db20a61f31504b23b5fcfe692a2e99fbc22c1cc12e743d63" gracePeriod=30 Feb 14 19:06:16 crc kubenswrapper[4897]: E0214 19:06:16.943525 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf02df6db_894f_46ff_9bdc_53559271efcc.slice/crio-006cb9f87c8e9b82c013f350d99ca6813d52dd9db09179684551e19ec51b572f.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.286356 4897 generic.go:334] "Generic (PLEG): container finished" podID="f02df6db-894f-46ff-9bdc-53559271efcc" containerID="006cb9f87c8e9b82c013f350d99ca6813d52dd9db09179684551e19ec51b572f" exitCode=0 Feb 14 19:06:17 crc 
kubenswrapper[4897]: I0214 19:06:17.286399 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78bcb6c6-95jr5" event={"ID":"f02df6db-894f-46ff-9bdc-53559271efcc","Type":"ContainerDied","Data":"006cb9f87c8e9b82c013f350d99ca6813d52dd9db09179684551e19ec51b572f"} Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.839644 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.854676 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67c9665685-zvrsn"] Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.865381 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-697fc44bdc-wm8v2" Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.907010 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-796669b846-cd6hr"] Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.934522 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5fc95b4d56-9mkgz"] Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.936507 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.940292 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.943617 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 14 19:06:17 crc kubenswrapper[4897]: I0214 19:06:17.964070 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5fc95b4d56-9mkgz"] Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.027101 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84bd5445c4-lf5pt"] Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.028982 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.030260 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-combined-ca-bundle\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.030381 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-internal-tls-certs\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.030437 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7rrv\" (UniqueName: 
\"kubernetes.io/projected/a2149326-55f7-405e-a005-d2b44e58342c-kube-api-access-l7rrv\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.030505 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-public-tls-certs\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.030572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data-custom\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.030599 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.035370 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.035608 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.054470 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84bd5445c4-lf5pt"] Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133283 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7rrv\" (UniqueName: \"kubernetes.io/projected/a2149326-55f7-405e-a005-d2b44e58342c-kube-api-access-l7rrv\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133338 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994sd\" (UniqueName: \"kubernetes.io/projected/62ecb4f3-ad3f-4146-99b6-be063902ea75-kube-api-access-994sd\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133373 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data-custom\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133406 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-public-tls-certs\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133459 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data-custom\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133495 
4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133526 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-internal-tls-certs\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-combined-ca-bundle\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133621 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-public-tls-certs\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133661 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-combined-ca-bundle\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133678 
4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.133697 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-internal-tls-certs\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.141874 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data-custom\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.147369 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.151167 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-combined-ca-bundle\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.152955 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-internal-tls-certs\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.161776 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-public-tls-certs\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.188729 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7rrv\" (UniqueName: \"kubernetes.io/projected/a2149326-55f7-405e-a005-d2b44e58342c-kube-api-access-l7rrv\") pod \"heat-api-5fc95b4d56-9mkgz\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.235498 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-combined-ca-bundle\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.276796 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.277442 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994sd\" (UniqueName: 
\"kubernetes.io/projected/62ecb4f3-ad3f-4146-99b6-be063902ea75-kube-api-access-994sd\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.277501 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data-custom\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.277700 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-internal-tls-certs\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.277863 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-public-tls-certs\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.283566 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-internal-tls-certs\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.285692 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data-custom\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.288707 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-combined-ca-bundle\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.299110 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.301171 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-994sd\" (UniqueName: \"kubernetes.io/projected/62ecb4f3-ad3f-4146-99b6-be063902ea75-kube-api-access-994sd\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.316010 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-public-tls-certs\") pod \"heat-cfnapi-84bd5445c4-lf5pt\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.319871 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.361622 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.590237 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601387 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-run-httpd\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-log-httpd\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601468 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-config-data\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601497 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-combined-ca-bundle\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601520 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-scripts\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601552 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-sg-core-conf-yaml\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.601576 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7jd\" (UniqueName: \"kubernetes.io/projected/8fb69d8d-0e17-4fce-83d7-c983dade92d9-kube-api-access-wn7jd\") pod \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\" (UID: \"8fb69d8d-0e17-4fce-83d7-c983dade92d9\") " Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.602588 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.602943 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.607695 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-scripts" (OuterVolumeSpecName: "scripts") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.618129 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb69d8d-0e17-4fce-83d7-c983dade92d9-kube-api-access-wn7jd" (OuterVolumeSpecName: "kube-api-access-wn7jd") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "kube-api-access-wn7jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.678939 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.705184 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.705217 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fb69d8d-0e17-4fce-83d7-c983dade92d9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.705228 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.705236 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.705246 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7jd\" (UniqueName: \"kubernetes.io/projected/8fb69d8d-0e17-4fce-83d7-c983dade92d9-kube-api-access-wn7jd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.904717 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.912845 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.915205 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-config-data" (OuterVolumeSpecName: "config-data") pod "8fb69d8d-0e17-4fce-83d7-c983dade92d9" (UID: "8fb69d8d-0e17-4fce-83d7-c983dade92d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:18 crc kubenswrapper[4897]: W0214 19:06:18.948015 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode56dc64b_fe4e_4e4c_9266_cb073ab171e8.slice/crio-45a8a84ffb21aec9f7f04f5d839c5755d87a161f60fc5d3f56eac86256c4745a WatchSource:0}: Error finding container 45a8a84ffb21aec9f7f04f5d839c5755d87a161f60fc5d3f56eac86256c4745a: Status 404 returned error can't find the container with id 45a8a84ffb21aec9f7f04f5d839c5755d87a161f60fc5d3f56eac86256c4745a Feb 14 19:06:18 crc kubenswrapper[4897]: I0214 19:06:18.951640 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-kt766"] Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.017281 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb69d8d-0e17-4fce-83d7-c983dade92d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.107806 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-796669b846-cd6hr"] Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.416812 4897 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.416817 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fb69d8d-0e17-4fce-83d7-c983dade92d9","Type":"ContainerDied","Data":"cdf761d947d703368c31790fd8fd4b55ff0eb0660771cba73b485c113b8d11c9"} Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.417220 4897 scope.go:117] "RemoveContainer" containerID="1ef467dc1eac14c9f1cfb39daf5dfa4b241eb9208fe08d70df48f51546b37db3" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.422376 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796669b846-cd6hr" event={"ID":"0e4b6b13-37e3-4061-9e06-5969de8b94f1","Type":"ContainerStarted","Data":"e6a2738446421840499033d33a66ce41da52dc5653fc79e54176bbcd554a531a"} Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.437720 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"58bd1c73-7683-4665-92cc-2dbb8a1658a3","Type":"ContainerStarted","Data":"a54f1eaeaa73f4ef7aecf86fd881065626fd078002d22c6bcd512e20df4cdc71"} Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.452374 4897 generic.go:334] "Generic (PLEG): container finished" podID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerID="e3e9114fc9373e26b71f0639699cd234267efb30b593aac5d5850cf5a83642d9" exitCode=0 Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.452437 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" event={"ID":"e56dc64b-fe4e-4e4c-9266-cb073ab171e8","Type":"ContainerDied","Data":"e3e9114fc9373e26b71f0639699cd234267efb30b593aac5d5850cf5a83642d9"} Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.452471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" 
event={"ID":"e56dc64b-fe4e-4e4c-9266-cb073ab171e8","Type":"ContainerStarted","Data":"45a8a84ffb21aec9f7f04f5d839c5755d87a161f60fc5d3f56eac86256c4745a"} Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.499253 4897 scope.go:117] "RemoveContainer" containerID="ddc76b40d2e013af34001f733a82ec7a31602e292c41f23b0a0dcc2397b9bdb8" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.610067 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.820765109 podStartE2EDuration="16.610048257s" podCreationTimestamp="2026-02-14 19:06:03 +0000 UTC" firstStartedPulling="2026-02-14 19:06:04.39545331 +0000 UTC m=+1417.371861803" lastFinishedPulling="2026-02-14 19:06:18.184736468 +0000 UTC m=+1431.161144951" observedRunningTime="2026-02-14 19:06:19.4606857 +0000 UTC m=+1432.437094193" watchObservedRunningTime="2026-02-14 19:06:19.610048257 +0000 UTC m=+1432.586456740" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.636399 4897 scope.go:117] "RemoveContainer" containerID="1d952058c55e433be40d9c8cfa8f59ce4da5b40845d30717f31a857b05b6797c" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.782119 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.798471 4897 scope.go:117] "RemoveContainer" containerID="760a6f2275c9ee6c8d45053f8eac13713f8914b73393fe564d116b644dd6e7c5" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.863851 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.863908 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84bd5445c4-lf5pt"] Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.863921 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67c9665685-zvrsn"] Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.863932 4897 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:19 crc kubenswrapper[4897]: E0214 19:06:19.864333 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="sg-core" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864365 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="sg-core" Feb 14 19:06:19 crc kubenswrapper[4897]: E0214 19:06:19.864395 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="proxy-httpd" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864402 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="proxy-httpd" Feb 14 19:06:19 crc kubenswrapper[4897]: E0214 19:06:19.864441 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-central-agent" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864448 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-central-agent" Feb 14 19:06:19 crc kubenswrapper[4897]: E0214 19:06:19.864464 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-notification-agent" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864470 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-notification-agent" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864808 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="sg-core" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864924 4897 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="proxy-httpd" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864940 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-central-agent" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.864949 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" containerName="ceilometer-notification-agent" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.883841 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.891735 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.891946 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:06:19 crc kubenswrapper[4897]: I0214 19:06:19.948089 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-94b476d6c-nbxhf"] Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.006768 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dc4df654d-9w4f2"] Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.046514 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d7f548864-bdfgg"] Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.070711 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.070769 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-scripts\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.070862 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.070906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-config-data\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.071082 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-log-httpd\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.071120 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45m4\" (UniqueName: \"kubernetes.io/projected/8ec536ba-5940-41a9-8334-b622eeb2e669-kube-api-access-t45m4\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.071172 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-run-httpd\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.076505 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86f48db4c-p7v4g"] Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.091076 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.109474 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5fc95b4d56-9mkgz"] Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173040 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-scripts\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173086 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173128 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-config-data\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173255 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-log-httpd\") pod \"ceilometer-0\" (UID: 
\"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173284 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45m4\" (UniqueName: \"kubernetes.io/projected/8ec536ba-5940-41a9-8334-b622eeb2e669-kube-api-access-t45m4\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173328 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-run-httpd\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.173355 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.180043 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-scripts\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.180323 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-config-data\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.180606 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-log-httpd\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.180831 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-run-httpd\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.195354 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.197070 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.213442 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45m4\" (UniqueName: \"kubernetes.io/projected/8ec536ba-5940-41a9-8334-b622eeb2e669-kube-api-access-t45m4\") pod \"ceilometer-0\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.357730 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.471246 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94b476d6c-nbxhf" event={"ID":"aec03a9b-3137-443f-b07f-eade8ffa27f5","Type":"ContainerStarted","Data":"af1013973a62dac0475707c831692c272c8ec84d5dd1d9fc0a6aa265047d4e27"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.473009 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" event={"ID":"62ecb4f3-ad3f-4146-99b6-be063902ea75","Type":"ContainerStarted","Data":"cfd8c199990154c0b31159d75ba1f7ac009cbd37c3177db06842bda6dca9e4fe"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.474791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dc4df654d-9w4f2" event={"ID":"3ff2fa58-497f-4e1c-8447-a25032ebac67","Type":"ContainerStarted","Data":"323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.474850 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dc4df654d-9w4f2" event={"ID":"3ff2fa58-497f-4e1c-8447-a25032ebac67","Type":"ContainerStarted","Data":"691ea19880ab14d79ac61e482efcb2b940fd315495537416e3dfc7d8b5586d47"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.475073 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.476070 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d7f548864-bdfgg" event={"ID":"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5","Type":"ContainerStarted","Data":"35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.476094 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d7f548864-bdfgg" 
event={"ID":"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5","Type":"ContainerStarted","Data":"72652d35d5f1e8c7b0d13fa03db67857d8526182a6473dbbd04f2ca4e958c746"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.476717 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.488810 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fc95b4d56-9mkgz" event={"ID":"a2149326-55f7-405e-a005-d2b44e58342c","Type":"ContainerStarted","Data":"24776381426f3ceb275ffdbb9213fcd9be95374ca27842b95cdc009f1c5a3c7b"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.490464 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67c9665685-zvrsn" event={"ID":"5c324e69-4bb9-40a6-a883-73a42e9ef646","Type":"ContainerStarted","Data":"a06c3362e4464483b6928fcf6a6fe143c6f71b45995909c06cd95294d4309c1b"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.510642 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-dc4df654d-9w4f2" podStartSLOduration=5.510624299 podStartE2EDuration="5.510624299s" podCreationTimestamp="2026-02-14 19:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:20.499904153 +0000 UTC m=+1433.476312636" watchObservedRunningTime="2026-02-14 19:06:20.510624299 +0000 UTC m=+1433.487032782" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.518760 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" event={"ID":"e56dc64b-fe4e-4e4c-9266-cb073ab171e8","Type":"ContainerStarted","Data":"83274da8965f06d985e62ea5e9947a8492df9f55c1a320626386a00ae230fdc1"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.518805 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.530391 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5d7f548864-bdfgg" podStartSLOduration=11.530369479 podStartE2EDuration="11.530369479s" podCreationTimestamp="2026-02-14 19:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:20.529625935 +0000 UTC m=+1433.506034418" watchObservedRunningTime="2026-02-14 19:06:20.530369479 +0000 UTC m=+1433.506777962" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.533112 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" event={"ID":"c0485238-dabe-46e0-87b1-239d64814ef8","Type":"ContainerStarted","Data":"faef5c8425d0d3bf8cc5341fe39d4e47b9fa63eab88061cc1470e10bccb9e09e"} Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.899487 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" podStartSLOduration=11.899469182 podStartE2EDuration="11.899469182s" podCreationTimestamp="2026-02-14 19:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:20.562089305 +0000 UTC m=+1433.538497808" watchObservedRunningTime="2026-02-14 19:06:20.899469182 +0000 UTC m=+1433.875877665" Feb 14 19:06:20 crc kubenswrapper[4897]: I0214 19:06:20.907621 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:20 crc kubenswrapper[4897]: W0214 19:06:20.938153 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ec536ba_5940_41a9_8334_b622eeb2e669.slice/crio-e54637865acd89719a36f3585313958f513d42db19f48a71db6a50e78b523507 WatchSource:0}: Error finding 
container e54637865acd89719a36f3585313958f513d42db19f48a71db6a50e78b523507: Status 404 returned error can't find the container with id e54637865acd89719a36f3585313958f513d42db19f48a71db6a50e78b523507 Feb 14 19:06:21 crc kubenswrapper[4897]: I0214 19:06:21.427853 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:21 crc kubenswrapper[4897]: I0214 19:06:21.544605 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerStarted","Data":"e54637865acd89719a36f3585313958f513d42db19f48a71db6a50e78b523507"} Feb 14 19:06:21 crc kubenswrapper[4897]: I0214 19:06:21.822996 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb69d8d-0e17-4fce-83d7-c983dade92d9" path="/var/lib/kubelet/pods/8fb69d8d-0e17-4fce-83d7-c983dade92d9/volumes" Feb 14 19:06:21 crc kubenswrapper[4897]: I0214 19:06:21.823770 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:06:21 crc kubenswrapper[4897]: I0214 19:06:21.823986 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-log" containerID="cri-o://b727a440edf144138b581af5aa46095cb32e7eaec0e1bc03747739a8061943c7" gracePeriod=30 Feb 14 19:06:21 crc kubenswrapper[4897]: I0214 19:06:21.824083 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-httpd" containerID="cri-o://ec8027176c9ecec33fecc0f1fb9f29f8ca9c4068b270aa33aaf3c3d639304bd9" gracePeriod=30 Feb 14 19:06:22 crc kubenswrapper[4897]: I0214 19:06:22.627124 4897 generic.go:334] "Generic (PLEG): container finished" podID="f02df6db-894f-46ff-9bdc-53559271efcc" 
containerID="17a52e8a3fe8f070db20a61f31504b23b5fcfe692a2e99fbc22c1cc12e743d63" exitCode=0 Feb 14 19:06:22 crc kubenswrapper[4897]: I0214 19:06:22.627345 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78bcb6c6-95jr5" event={"ID":"f02df6db-894f-46ff-9bdc-53559271efcc","Type":"ContainerDied","Data":"17a52e8a3fe8f070db20a61f31504b23b5fcfe692a2e99fbc22c1cc12e743d63"} Feb 14 19:06:22 crc kubenswrapper[4897]: I0214 19:06:22.632398 4897 generic.go:334] "Generic (PLEG): container finished" podID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerID="b727a440edf144138b581af5aa46095cb32e7eaec0e1bc03747739a8061943c7" exitCode=143 Feb 14 19:06:22 crc kubenswrapper[4897]: I0214 19:06:22.633037 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47b5146-8110-4b6d-972a-e3d08f5c7e3c","Type":"ContainerDied","Data":"b727a440edf144138b581af5aa46095cb32e7eaec0e1bc03747739a8061943c7"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.067442 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.169825 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-config\") pod \"f02df6db-894f-46ff-9bdc-53559271efcc\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.169876 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-httpd-config\") pod \"f02df6db-894f-46ff-9bdc-53559271efcc\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.169903 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl9gr\" (UniqueName: \"kubernetes.io/projected/f02df6db-894f-46ff-9bdc-53559271efcc-kube-api-access-wl9gr\") pod \"f02df6db-894f-46ff-9bdc-53559271efcc\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.170107 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-ovndb-tls-certs\") pod \"f02df6db-894f-46ff-9bdc-53559271efcc\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.170138 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-combined-ca-bundle\") pod \"f02df6db-894f-46ff-9bdc-53559271efcc\" (UID: \"f02df6db-894f-46ff-9bdc-53559271efcc\") " Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.187054 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f02df6db-894f-46ff-9bdc-53559271efcc" (UID: "f02df6db-894f-46ff-9bdc-53559271efcc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.189334 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02df6db-894f-46ff-9bdc-53559271efcc-kube-api-access-wl9gr" (OuterVolumeSpecName: "kube-api-access-wl9gr") pod "f02df6db-894f-46ff-9bdc-53559271efcc" (UID: "f02df6db-894f-46ff-9bdc-53559271efcc"). InnerVolumeSpecName "kube-api-access-wl9gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.273627 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.273658 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl9gr\" (UniqueName: \"kubernetes.io/projected/f02df6db-894f-46ff-9bdc-53559271efcc-kube-api-access-wl9gr\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.398345 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.398563 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-log" containerID="cri-o://0148c16d6c818afd1210fd9f66d1e08ddc906dda9c37b68da948287b3ca66b8b" gracePeriod=30 Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.398691 4897 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-httpd" containerID="cri-o://ebdbb10eebc8deea4b7f629fcf730b38457933a0731c74ceac878a4d4864ca1c" gracePeriod=30 Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.445155 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f02df6db-894f-46ff-9bdc-53559271efcc" (UID: "f02df6db-894f-46ff-9bdc-53559271efcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.478583 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.536095 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-config" (OuterVolumeSpecName: "config") pod "f02df6db-894f-46ff-9bdc-53559271efcc" (UID: "f02df6db-894f-46ff-9bdc-53559271efcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.581572 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.588620 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f02df6db-894f-46ff-9bdc-53559271efcc" (UID: "f02df6db-894f-46ff-9bdc-53559271efcc"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.686916 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fc95b4d56-9mkgz" event={"ID":"a2149326-55f7-405e-a005-d2b44e58342c","Type":"ContainerStarted","Data":"23d04ad7640a980ddcfd7c7faf1400d8b46afa5ac4578c55dedf9535aeca02f1"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.687306 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.687195 4897 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f02df6db-894f-46ff-9bdc-53559271efcc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.716731 4897 generic.go:334] "Generic (PLEG): container finished" podID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerID="0148c16d6c818afd1210fd9f66d1e08ddc906dda9c37b68da948287b3ca66b8b" exitCode=143 Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.716825 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27b24061-39f4-4ddd-aa33-bdd4da0e90bd","Type":"ContainerDied","Data":"0148c16d6c818afd1210fd9f66d1e08ddc906dda9c37b68da948287b3ca66b8b"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.718187 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67c9665685-zvrsn" event={"ID":"5c324e69-4bb9-40a6-a883-73a42e9ef646","Type":"ContainerStarted","Data":"b35f1aa14f1851f164f687abb79d1f63e8e52c9a978deac806af0cc8eed1ecee"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.718305 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-67c9665685-zvrsn" podUID="5c324e69-4bb9-40a6-a883-73a42e9ef646" containerName="heat-api" 
containerID="cri-o://b35f1aa14f1851f164f687abb79d1f63e8e52c9a978deac806af0cc8eed1ecee" gracePeriod=60 Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.718541 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.730092 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94b476d6c-nbxhf" event={"ID":"aec03a9b-3137-443f-b07f-eade8ffa27f5","Type":"ContainerStarted","Data":"d74ac673b3dbbe6dc1e7848146fe849ff55ae8eb3660efe4fd2d3b6eb4fc81b9"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.730833 4897 scope.go:117] "RemoveContainer" containerID="d74ac673b3dbbe6dc1e7848146fe849ff55ae8eb3660efe4fd2d3b6eb4fc81b9" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.731319 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5fc95b4d56-9mkgz" podStartSLOduration=4.059931708 podStartE2EDuration="6.731283351s" podCreationTimestamp="2026-02-14 19:06:17 +0000 UTC" firstStartedPulling="2026-02-14 19:06:19.798592404 +0000 UTC m=+1432.775000887" lastFinishedPulling="2026-02-14 19:06:22.469944047 +0000 UTC m=+1435.446352530" observedRunningTime="2026-02-14 19:06:23.710528639 +0000 UTC m=+1436.686937142" watchObservedRunningTime="2026-02-14 19:06:23.731283351 +0000 UTC m=+1436.707691834" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.746468 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" event={"ID":"62ecb4f3-ad3f-4146-99b6-be063902ea75","Type":"ContainerStarted","Data":"71ac1f7e244a6daa7ed1c05a06b434a1c59fb9e3ac756c95f6d8f81ae1fc1090"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.747413 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.748880 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-796669b846-cd6hr" event={"ID":"0e4b6b13-37e3-4061-9e06-5969de8b94f1","Type":"ContainerStarted","Data":"fd8ad42385fae6c852c6f28bd7b595e48e8887c1c45aea9f2448664a558aa78c"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.748980 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-796669b846-cd6hr" podUID="0e4b6b13-37e3-4061-9e06-5969de8b94f1" containerName="heat-cfnapi" containerID="cri-o://fd8ad42385fae6c852c6f28bd7b595e48e8887c1c45aea9f2448664a558aa78c" gracePeriod=60 Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.749288 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.757908 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-67c9665685-zvrsn" podStartSLOduration=12.002901108 podStartE2EDuration="14.757885495s" podCreationTimestamp="2026-02-14 19:06:09 +0000 UTC" firstStartedPulling="2026-02-14 19:06:19.715456685 +0000 UTC m=+1432.691865168" lastFinishedPulling="2026-02-14 19:06:22.470441082 +0000 UTC m=+1435.446849555" observedRunningTime="2026-02-14 19:06:23.737454594 +0000 UTC m=+1436.713863067" watchObservedRunningTime="2026-02-14 19:06:23.757885495 +0000 UTC m=+1436.734293978" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.770396 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78bcb6c6-95jr5" event={"ID":"f02df6db-894f-46ff-9bdc-53559271efcc","Type":"ContainerDied","Data":"1a2a911e6d2267c919e2501d34254837a337323019e73ff8ac2f248ee9799bbc"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.770448 4897 scope.go:117] "RemoveContainer" containerID="006cb9f87c8e9b82c013f350d99ca6813d52dd9db09179684551e19ec51b572f" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.770595 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f78bcb6c6-95jr5" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.776695 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" event={"ID":"c0485238-dabe-46e0-87b1-239d64814ef8","Type":"ContainerStarted","Data":"d8c558f35dcfcb203f006f27add6bbbfc69bc9cc55fc9bb509ec23dd210917a2"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.788454 4897 scope.go:117] "RemoveContainer" containerID="d8c558f35dcfcb203f006f27add6bbbfc69bc9cc55fc9bb509ec23dd210917a2" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.814565 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" podStartSLOduration=4.150777938 podStartE2EDuration="6.814545434s" podCreationTimestamp="2026-02-14 19:06:17 +0000 UTC" firstStartedPulling="2026-02-14 19:06:19.7111829 +0000 UTC m=+1432.687591383" lastFinishedPulling="2026-02-14 19:06:22.374950396 +0000 UTC m=+1435.351358879" observedRunningTime="2026-02-14 19:06:23.790456987 +0000 UTC m=+1436.766865480" watchObservedRunningTime="2026-02-14 19:06:23.814545434 +0000 UTC m=+1436.790953917" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.839232 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-796669b846-cd6hr" podStartSLOduration=11.58235252 podStartE2EDuration="14.839213508s" podCreationTimestamp="2026-02-14 19:06:09 +0000 UTC" firstStartedPulling="2026-02-14 19:06:19.120454352 +0000 UTC m=+1432.096862835" lastFinishedPulling="2026-02-14 19:06:22.37731534 +0000 UTC m=+1435.353723823" observedRunningTime="2026-02-14 19:06:23.817526187 +0000 UTC m=+1436.793934670" watchObservedRunningTime="2026-02-14 19:06:23.839213508 +0000 UTC m=+1436.815621991" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.850461 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerStarted","Data":"3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee"} Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.921966 4897 scope.go:117] "RemoveContainer" containerID="17a52e8a3fe8f070db20a61f31504b23b5fcfe692a2e99fbc22c1cc12e743d63" Feb 14 19:06:23 crc kubenswrapper[4897]: I0214 19:06:23.966928 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f78bcb6c6-95jr5"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.035160 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f78bcb6c6-95jr5"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.128639 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sc5s2"] Feb 14 19:06:24 crc kubenswrapper[4897]: E0214 19:06:24.130142 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-api" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.130178 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-api" Feb 14 19:06:24 crc kubenswrapper[4897]: E0214 19:06:24.130211 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-httpd" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.130221 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-httpd" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.131550 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-httpd" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.131679 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" containerName="neutron-api" Feb 14 
19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.134374 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.187761 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sc5s2"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.203702 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-c9dv9"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.208110 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.243516 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-c9dv9"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.257410 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/312c2219-c7db-4a28-901f-1d03a379e088-operator-scripts\") pod \"nova-cell0-db-create-c9dv9\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.257483 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8cl\" (UniqueName: \"kubernetes.io/projected/8f8b79c5-fdc5-49a7-8da5-278bbc982740-kube-api-access-8s8cl\") pod \"nova-api-db-create-sc5s2\" (UID: \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.257567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f8b79c5-fdc5-49a7-8da5-278bbc982740-operator-scripts\") pod \"nova-api-db-create-sc5s2\" (UID: 
\"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.257636 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5vbz\" (UniqueName: \"kubernetes.io/projected/312c2219-c7db-4a28-901f-1d03a379e088-kube-api-access-p5vbz\") pod \"nova-cell0-db-create-c9dv9\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.269070 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f35f-account-create-update-nxqdw"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.270449 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.273417 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.292364 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f35f-account-create-update-nxqdw"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.364406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c892fc72-2d4f-4417-9078-65f0519fcc2d-operator-scripts\") pod \"nova-api-f35f-account-create-update-nxqdw\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") " pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.364486 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f8b79c5-fdc5-49a7-8da5-278bbc982740-operator-scripts\") pod \"nova-api-db-create-sc5s2\" (UID: \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " 
pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.364522 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49x6f\" (UniqueName: \"kubernetes.io/projected/c892fc72-2d4f-4417-9078-65f0519fcc2d-kube-api-access-49x6f\") pod \"nova-api-f35f-account-create-update-nxqdw\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") " pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.364583 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5vbz\" (UniqueName: \"kubernetes.io/projected/312c2219-c7db-4a28-901f-1d03a379e088-kube-api-access-p5vbz\") pod \"nova-cell0-db-create-c9dv9\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.364645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/312c2219-c7db-4a28-901f-1d03a379e088-operator-scripts\") pod \"nova-cell0-db-create-c9dv9\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.364692 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8cl\" (UniqueName: \"kubernetes.io/projected/8f8b79c5-fdc5-49a7-8da5-278bbc982740-kube-api-access-8s8cl\") pod \"nova-api-db-create-sc5s2\" (UID: \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.365286 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f8b79c5-fdc5-49a7-8da5-278bbc982740-operator-scripts\") pod \"nova-api-db-create-sc5s2\" (UID: 
\"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.365724 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/312c2219-c7db-4a28-901f-1d03a379e088-operator-scripts\") pod \"nova-cell0-db-create-c9dv9\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.410224 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5vbz\" (UniqueName: \"kubernetes.io/projected/312c2219-c7db-4a28-901f-1d03a379e088-kube-api-access-p5vbz\") pod \"nova-cell0-db-create-c9dv9\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.417588 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8cl\" (UniqueName: \"kubernetes.io/projected/8f8b79c5-fdc5-49a7-8da5-278bbc982740-kube-api-access-8s8cl\") pod \"nova-api-db-create-sc5s2\" (UID: \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.449722 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-5q8wx"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.451200 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.466514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c892fc72-2d4f-4417-9078-65f0519fcc2d-operator-scripts\") pod \"nova-api-f35f-account-create-update-nxqdw\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") " pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.466592 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49x6f\" (UniqueName: \"kubernetes.io/projected/c892fc72-2d4f-4417-9078-65f0519fcc2d-kube-api-access-49x6f\") pod \"nova-api-f35f-account-create-update-nxqdw\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") " pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.467773 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c892fc72-2d4f-4417-9078-65f0519fcc2d-operator-scripts\") pod \"nova-api-f35f-account-create-update-nxqdw\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") " pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.486231 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e2ee-account-create-update-g2kt2"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.487788 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.493323 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.499044 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.503983 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49x6f\" (UniqueName: \"kubernetes.io/projected/c892fc72-2d4f-4417-9078-65f0519fcc2d-kube-api-access-49x6f\") pod \"nova-api-f35f-account-create-update-nxqdw\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") " pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.512233 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5q8wx"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.531194 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e2ee-account-create-update-g2kt2"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.535743 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.568981 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276c7c0-3036-4f26-8971-92a5c22b7840-operator-scripts\") pod \"nova-cell0-e2ee-account-create-update-g2kt2\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") " pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.569138 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wdj9\" (UniqueName: \"kubernetes.io/projected/adaee017-ddec-4818-acc9-54a5caa1571f-kube-api-access-6wdj9\") pod \"nova-cell1-db-create-5q8wx\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") " pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.569381 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adaee017-ddec-4818-acc9-54a5caa1571f-operator-scripts\") pod \"nova-cell1-db-create-5q8wx\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") " pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.569426 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwth5\" (UniqueName: \"kubernetes.io/projected/e276c7c0-3036-4f26-8971-92a5c22b7840-kube-api-access-wwth5\") pod \"nova-cell0-e2ee-account-create-update-g2kt2\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") " pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.629426 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5d42-account-create-update-kw2zk"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.631835 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.638542 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.652217 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5d42-account-create-update-kw2zk"] Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.671583 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wdj9\" (UniqueName: \"kubernetes.io/projected/adaee017-ddec-4818-acc9-54a5caa1571f-kube-api-access-6wdj9\") pod \"nova-cell1-db-create-5q8wx\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") " pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.671922 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adaee017-ddec-4818-acc9-54a5caa1571f-operator-scripts\") pod \"nova-cell1-db-create-5q8wx\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") " pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.671945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwth5\" (UniqueName: \"kubernetes.io/projected/e276c7c0-3036-4f26-8971-92a5c22b7840-kube-api-access-wwth5\") pod \"nova-cell0-e2ee-account-create-update-g2kt2\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") " pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.672013 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276c7c0-3036-4f26-8971-92a5c22b7840-operator-scripts\") pod \"nova-cell0-e2ee-account-create-update-g2kt2\" (UID: 
\"e276c7c0-3036-4f26-8971-92a5c22b7840\") " pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.672052 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5ksx\" (UniqueName: \"kubernetes.io/projected/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-kube-api-access-p5ksx\") pod \"nova-cell1-5d42-account-create-update-kw2zk\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") " pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.672096 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-operator-scripts\") pod \"nova-cell1-5d42-account-create-update-kw2zk\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") " pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.673113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adaee017-ddec-4818-acc9-54a5caa1571f-operator-scripts\") pod \"nova-cell1-db-create-5q8wx\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") " pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.676374 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276c7c0-3036-4f26-8971-92a5c22b7840-operator-scripts\") pod \"nova-cell0-e2ee-account-create-update-g2kt2\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") " pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.694588 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwth5\" (UniqueName: 
\"kubernetes.io/projected/e276c7c0-3036-4f26-8971-92a5c22b7840-kube-api-access-wwth5\") pod \"nova-cell0-e2ee-account-create-update-g2kt2\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") " pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.694734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wdj9\" (UniqueName: \"kubernetes.io/projected/adaee017-ddec-4818-acc9-54a5caa1571f-kube-api-access-6wdj9\") pod \"nova-cell1-db-create-5q8wx\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") " pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.707763 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f35f-account-create-update-nxqdw" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.745990 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5q8wx" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.774749 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5ksx\" (UniqueName: \"kubernetes.io/projected/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-kube-api-access-p5ksx\") pod \"nova-cell1-5d42-account-create-update-kw2zk\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") " pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.774930 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-operator-scripts\") pod \"nova-cell1-5d42-account-create-update-kw2zk\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") " pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.776463 4897 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-operator-scripts\") pod \"nova-cell1-5d42-account-create-update-kw2zk\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") " pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.776833 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.802595 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5ksx\" (UniqueName: \"kubernetes.io/projected/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-kube-api-access-p5ksx\") pod \"nova-cell1-5d42-account-create-update-kw2zk\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") " pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.806350 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.894314 4897 generic.go:334] "Generic (PLEG): container finished" podID="c0485238-dabe-46e0-87b1-239d64814ef8" containerID="d8c558f35dcfcb203f006f27add6bbbfc69bc9cc55fc9bb509ec23dd210917a2" exitCode=1 Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.894973 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" event={"ID":"c0485238-dabe-46e0-87b1-239d64814ef8","Type":"ContainerDied","Data":"d8c558f35dcfcb203f006f27add6bbbfc69bc9cc55fc9bb509ec23dd210917a2"} Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.895010 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" event={"ID":"c0485238-dabe-46e0-87b1-239d64814ef8","Type":"ContainerStarted","Data":"1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692"} Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.895044 4897 scope.go:117] "RemoveContainer" containerID="d8c558f35dcfcb203f006f27add6bbbfc69bc9cc55fc9bb509ec23dd210917a2" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.895289 4897 scope.go:117] "RemoveContainer" containerID="1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692" Feb 14 19:06:24 crc kubenswrapper[4897]: E0214 19:06:24.895518 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-86f48db4c-p7v4g_openstack(c0485238-dabe-46e0-87b1-239d64814ef8)\"" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.913214 4897 generic.go:334] "Generic (PLEG): container finished" podID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerID="d74ac673b3dbbe6dc1e7848146fe849ff55ae8eb3660efe4fd2d3b6eb4fc81b9" 
exitCode=1 Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.913292 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94b476d6c-nbxhf" event={"ID":"aec03a9b-3137-443f-b07f-eade8ffa27f5","Type":"ContainerDied","Data":"d74ac673b3dbbe6dc1e7848146fe849ff55ae8eb3660efe4fd2d3b6eb4fc81b9"} Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.913320 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94b476d6c-nbxhf" event={"ID":"aec03a9b-3137-443f-b07f-eade8ffa27f5","Type":"ContainerStarted","Data":"21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c"} Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.914016 4897 scope.go:117] "RemoveContainer" containerID="21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c" Feb 14 19:06:24 crc kubenswrapper[4897]: E0214 19:06:24.914294 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-94b476d6c-nbxhf_openstack(aec03a9b-3137-443f-b07f-eade8ffa27f5)\"" pod="openstack/heat-api-94b476d6c-nbxhf" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.952485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerStarted","Data":"d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78"} Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.963363 4897 generic.go:334] "Generic (PLEG): container finished" podID="0e4b6b13-37e3-4061-9e06-5969de8b94f1" containerID="fd8ad42385fae6c852c6f28bd7b595e48e8887c1c45aea9f2448664a558aa78c" exitCode=0 Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.963433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796669b846-cd6hr" 
event={"ID":"0e4b6b13-37e3-4061-9e06-5969de8b94f1","Type":"ContainerDied","Data":"fd8ad42385fae6c852c6f28bd7b595e48e8887c1c45aea9f2448664a558aa78c"} Feb 14 19:06:24 crc kubenswrapper[4897]: I0214 19:06:24.993066 4897 scope.go:117] "RemoveContainer" containerID="d74ac673b3dbbe6dc1e7848146fe849ff55ae8eb3660efe4fd2d3b6eb4fc81b9" Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.149249 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.161705 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sc5s2"] Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.244225 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4f5v"] Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.244442 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="dnsmasq-dns" containerID="cri-o://34c914b71c349cb7c38e42c539e03e76c6eec67c09b0d60f9805530d85c70491" gracePeriod=10 Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.306351 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-c9dv9"] Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.831605 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f02df6db-894f-46ff-9bdc-53559271efcc" path="/var/lib/kubelet/pods/f02df6db-894f-46ff-9bdc-53559271efcc/volumes" Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.988197 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:25 crc kubenswrapper[4897]: I0214 19:06:25.997668 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c9dv9" event={"ID":"312c2219-c7db-4a28-901f-1d03a379e088","Type":"ContainerStarted","Data":"d2884b7d71638754c4eff7e3367ecf1fb5048a2cf0c8826f3a571c84eefe7713"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.006264 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7p6n\" (UniqueName: \"kubernetes.io/projected/0e4b6b13-37e3-4061-9e06-5969de8b94f1-kube-api-access-g7p6n\") pod \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.006386 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data\") pod \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.006684 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-combined-ca-bundle\") pod \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.006706 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data-custom\") pod \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\" (UID: \"0e4b6b13-37e3-4061-9e06-5969de8b94f1\") " Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.013074 4897 generic.go:334] "Generic (PLEG): container finished" podID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" 
containerID="ec8027176c9ecec33fecc0f1fb9f29f8ca9c4068b270aa33aaf3c3d639304bd9" exitCode=0 Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.013181 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47b5146-8110-4b6d-972a-e3d08f5c7e3c","Type":"ContainerDied","Data":"ec8027176c9ecec33fecc0f1fb9f29f8ca9c4068b270aa33aaf3c3d639304bd9"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.016165 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4b6b13-37e3-4061-9e06-5969de8b94f1-kube-api-access-g7p6n" (OuterVolumeSpecName: "kube-api-access-g7p6n") pod "0e4b6b13-37e3-4061-9e06-5969de8b94f1" (UID: "0e4b6b13-37e3-4061-9e06-5969de8b94f1"). InnerVolumeSpecName "kube-api-access-g7p6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.027198 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f35f-account-create-update-nxqdw"] Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.028716 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e4b6b13-37e3-4061-9e06-5969de8b94f1" (UID: "0e4b6b13-37e3-4061-9e06-5969de8b94f1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.041242 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sc5s2" event={"ID":"8f8b79c5-fdc5-49a7-8da5-278bbc982740","Type":"ContainerStarted","Data":"d44b65540a14b38bc434bfcb225316b32acaf10f73a82eef8a61ea2478482b1e"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.041283 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sc5s2" event={"ID":"8f8b79c5-fdc5-49a7-8da5-278bbc982740","Type":"ContainerStarted","Data":"5281b1fad13a16f0e691194a1889804239fd00499e3ec1686ea7f60bcb16f61b"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.047469 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796669b846-cd6hr" event={"ID":"0e4b6b13-37e3-4061-9e06-5969de8b94f1","Type":"ContainerDied","Data":"e6a2738446421840499033d33a66ce41da52dc5653fc79e54176bbcd554a531a"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.047521 4897 scope.go:117] "RemoveContainer" containerID="fd8ad42385fae6c852c6f28bd7b595e48e8887c1c45aea9f2448664a558aa78c" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.047652 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-796669b846-cd6hr" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.071778 4897 generic.go:334] "Generic (PLEG): container finished" podID="5c324e69-4bb9-40a6-a883-73a42e9ef646" containerID="b35f1aa14f1851f164f687abb79d1f63e8e52c9a978deac806af0cc8eed1ecee" exitCode=0 Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.071853 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67c9665685-zvrsn" event={"ID":"5c324e69-4bb9-40a6-a883-73a42e9ef646","Type":"ContainerDied","Data":"b35f1aa14f1851f164f687abb79d1f63e8e52c9a978deac806af0cc8eed1ecee"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.093642 4897 generic.go:334] "Generic (PLEG): container finished" podID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerID="34c914b71c349cb7c38e42c539e03e76c6eec67c09b0d60f9805530d85c70491" exitCode=0 Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.093709 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" event={"ID":"e04401f8-3fac-42bb-924b-1235cb127ed3","Type":"ContainerDied","Data":"34c914b71c349cb7c38e42c539e03e76c6eec67c09b0d60f9805530d85c70491"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.098275 4897 generic.go:334] "Generic (PLEG): container finished" podID="c0485238-dabe-46e0-87b1-239d64814ef8" containerID="1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692" exitCode=1 Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.099107 4897 scope.go:117] "RemoveContainer" containerID="1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.099313 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" event={"ID":"c0485238-dabe-46e0-87b1-239d64814ef8","Type":"ContainerDied","Data":"1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692"} Feb 14 19:06:26 crc kubenswrapper[4897]: E0214 
19:06:26.099344 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-86f48db4c-p7v4g_openstack(c0485238-dabe-46e0-87b1-239d64814ef8)\"" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.108287 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.108357 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7p6n\" (UniqueName: \"kubernetes.io/projected/0e4b6b13-37e3-4061-9e06-5969de8b94f1-kube-api-access-g7p6n\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.113708 4897 generic.go:334] "Generic (PLEG): container finished" podID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerID="21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c" exitCode=1 Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.113857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94b476d6c-nbxhf" event={"ID":"aec03a9b-3137-443f-b07f-eade8ffa27f5","Type":"ContainerDied","Data":"21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c"} Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.114551 4897 scope.go:117] "RemoveContainer" containerID="21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c" Feb 14 19:06:26 crc kubenswrapper[4897]: E0214 19:06:26.114795 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-94b476d6c-nbxhf_openstack(aec03a9b-3137-443f-b07f-eade8ffa27f5)\"" 
pod="openstack/heat-api-94b476d6c-nbxhf" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.148230 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.148757 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-94b476d6c-nbxhf" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.159359 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.208:5353: connect: connection refused" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.161931 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-sc5s2" podStartSLOduration=3.161909339 podStartE2EDuration="3.161909339s" podCreationTimestamp="2026-02-14 19:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:26.064806612 +0000 UTC m=+1439.041215095" watchObservedRunningTime="2026-02-14 19:06:26.161909339 +0000 UTC m=+1439.138317822" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.182330 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.182372 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.372943 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data" (OuterVolumeSpecName: "config-data") pod "0e4b6b13-37e3-4061-9e06-5969de8b94f1" (UID: 
"0e4b6b13-37e3-4061-9e06-5969de8b94f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.415298 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e4b6b13-37e3-4061-9e06-5969de8b94f1" (UID: "0e4b6b13-37e3-4061-9e06-5969de8b94f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.416487 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.416501 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4b6b13-37e3-4061-9e06-5969de8b94f1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.759286 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e2ee-account-create-update-g2kt2"] Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.783313 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5q8wx"] Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.927075 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5d42-account-create-update-kw2zk"] Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.943369 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.970685 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-796669b846-cd6hr"] Feb 14 19:06:26 crc kubenswrapper[4897]: I0214 19:06:26.979451 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-796669b846-cd6hr"] Feb 14 19:06:26 crc kubenswrapper[4897]: W0214 19:06:26.980440 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd09d35_34e4_4a37_ac93_455f2f12b0d5.slice/crio-938b8df2238e6f9623953781b3a6f5eba47986aefd882218af2e3e1073ba7a64 WatchSource:0}: Error finding container 938b8df2238e6f9623953781b3a6f5eba47986aefd882218af2e3e1073ba7a64: Status 404 returned error can't find the container with id 938b8df2238e6f9623953781b3a6f5eba47986aefd882218af2e3e1073ba7a64 Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.027912 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.037143 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.037215 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-svc\") pod \"e04401f8-3fac-42bb-924b-1235cb127ed3\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.037329 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-httpd-run\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.037393 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-swift-storage-0\") pod \"e04401f8-3fac-42bb-924b-1235cb127ed3\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.044599 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.045323 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.054852 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.145309 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5q8wx" event={"ID":"adaee017-ddec-4818-acc9-54a5caa1571f","Type":"ContainerStarted","Data":"bd6d1360f950d9031569802a0666973297a129c704f0d9cc1cc7252ec9731521"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.145995 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-sb\") pod \"e04401f8-3fac-42bb-924b-1235cb127ed3\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146047 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-scripts\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146132 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-combined-ca-bundle\") pod \"5c324e69-4bb9-40a6-a883-73a42e9ef646\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146367 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-lj5rt\" (UniqueName: \"kubernetes.io/projected/e04401f8-3fac-42bb-924b-1235cb127ed3-kube-api-access-lj5rt\") pod \"e04401f8-3fac-42bb-924b-1235cb127ed3\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146421 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-nb\") pod \"e04401f8-3fac-42bb-924b-1235cb127ed3\" (UID: \"e04401f8-3fac-42bb-924b-1235cb127ed3\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146451 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cqjz\" (UniqueName: \"kubernetes.io/projected/5c324e69-4bb9-40a6-a883-73a42e9ef646-kube-api-access-4cqjz\") pod \"5c324e69-4bb9-40a6-a883-73a42e9ef646\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146495 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-config-data\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146519 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjqr6\" (UniqueName: \"kubernetes.io/projected/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-kube-api-access-gjqr6\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146536 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-config\") pod \"e04401f8-3fac-42bb-924b-1235cb127ed3\" (UID: 
\"e04401f8-3fac-42bb-924b-1235cb127ed3\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146594 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-combined-ca-bundle\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146629 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data-custom\") pod \"5c324e69-4bb9-40a6-a883-73a42e9ef646\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146655 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-logs\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146672 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-public-tls-certs\") pod \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\" (UID: \"b47b5146-8110-4b6d-972a-e3d08f5c7e3c\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.146694 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data\") pod \"5c324e69-4bb9-40a6-a883-73a42e9ef646\" (UID: \"5c324e69-4bb9-40a6-a883-73a42e9ef646\") " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.153397 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-logs" (OuterVolumeSpecName: "logs") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.159295 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" event={"ID":"e276c7c0-3036-4f26-8971-92a5c22b7840","Type":"ContainerStarted","Data":"58fa1bd0f64a3aca15c93a14857e52909a82c2df529f71bdcc7c05ab40125c5d"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.216466 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47b5146-8110-4b6d-972a-e3d08f5c7e3c","Type":"ContainerDied","Data":"1a2e26ee20e4599c836563b332449dd61fb4d694d2fb8e93118b941151217085"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.216518 4897 scope.go:117] "RemoveContainer" containerID="ec8027176c9ecec33fecc0f1fb9f29f8ca9c4068b270aa33aaf3c3d639304bd9" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.216634 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.228443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f35f-account-create-update-nxqdw" event={"ID":"c892fc72-2d4f-4417-9078-65f0519fcc2d","Type":"ContainerStarted","Data":"0a0b3c262e416aa480cc223b9d58998fbe0039550bfa686625055983b1fb03ef"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.228486 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f35f-account-create-update-nxqdw" event={"ID":"c892fc72-2d4f-4417-9078-65f0519fcc2d","Type":"ContainerStarted","Data":"839cca079ddd7ce193685b69838fe2aeb11941d397de955b338690b1a6baea47"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.249460 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.253271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" event={"ID":"6fd09d35-34e4-4a37-ac93-455f2f12b0d5","Type":"ContainerStarted","Data":"938b8df2238e6f9623953781b3a6f5eba47986aefd882218af2e3e1073ba7a64"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.260870 4897 scope.go:117] "RemoveContainer" containerID="b727a440edf144138b581af5aa46095cb32e7eaec0e1bc03747739a8061943c7" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.264455 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f35f-account-create-update-nxqdw" podStartSLOduration=3.264437528 podStartE2EDuration="3.264437528s" podCreationTimestamp="2026-02-14 19:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:27.259871885 +0000 UTC m=+1440.236280368" 
watchObservedRunningTime="2026-02-14 19:06:27.264437528 +0000 UTC m=+1440.240846001" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.277238 4897 generic.go:334] "Generic (PLEG): container finished" podID="8f8b79c5-fdc5-49a7-8da5-278bbc982740" containerID="d44b65540a14b38bc434bfcb225316b32acaf10f73a82eef8a61ea2478482b1e" exitCode=0 Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.277726 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sc5s2" event={"ID":"8f8b79c5-fdc5-49a7-8da5-278bbc982740","Type":"ContainerDied","Data":"d44b65540a14b38bc434bfcb225316b32acaf10f73a82eef8a61ea2478482b1e"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.297640 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04401f8-3fac-42bb-924b-1235cb127ed3-kube-api-access-lj5rt" (OuterVolumeSpecName: "kube-api-access-lj5rt") pod "e04401f8-3fac-42bb-924b-1235cb127ed3" (UID: "e04401f8-3fac-42bb-924b-1235cb127ed3"). InnerVolumeSpecName "kube-api-access-lj5rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.299330 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerStarted","Data":"cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.301183 4897 generic.go:334] "Generic (PLEG): container finished" podID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerID="ebdbb10eebc8deea4b7f629fcf730b38457933a0731c74ceac878a4d4864ca1c" exitCode=0 Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.301241 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27b24061-39f4-4ddd-aa33-bdd4da0e90bd","Type":"ContainerDied","Data":"ebdbb10eebc8deea4b7f629fcf730b38457933a0731c74ceac878a4d4864ca1c"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.309520 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67c9665685-zvrsn" event={"ID":"5c324e69-4bb9-40a6-a883-73a42e9ef646","Type":"ContainerDied","Data":"a06c3362e4464483b6928fcf6a6fe143c6f71b45995909c06cd95294d4309c1b"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.309585 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67c9665685-zvrsn" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.320142 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-kube-api-access-gjqr6" (OuterVolumeSpecName: "kube-api-access-gjqr6") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "kube-api-access-gjqr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.334805 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c324e69-4bb9-40a6-a883-73a42e9ef646" (UID: "5c324e69-4bb9-40a6-a883-73a42e9ef646"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.338195 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c324e69-4bb9-40a6-a883-73a42e9ef646-kube-api-access-4cqjz" (OuterVolumeSpecName: "kube-api-access-4cqjz") pod "5c324e69-4bb9-40a6-a883-73a42e9ef646" (UID: "5c324e69-4bb9-40a6-a883-73a42e9ef646"). InnerVolumeSpecName "kube-api-access-4cqjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.342722 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" event={"ID":"e04401f8-3fac-42bb-924b-1235cb127ed3","Type":"ContainerDied","Data":"ad1b6c7679c3c3da5a6922c9fce4e458226ae29d698d8deee28e371711aaf297"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.344770 4897 scope.go:117] "RemoveContainer" containerID="b35f1aa14f1851f164f687abb79d1f63e8e52c9a978deac806af0cc8eed1ecee" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.345445 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4f5v" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.348774 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-scripts" (OuterVolumeSpecName: "scripts") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.355818 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj5rt\" (UniqueName: \"kubernetes.io/projected/e04401f8-3fac-42bb-924b-1235cb127ed3-kube-api-access-lj5rt\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.355839 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cqjz\" (UniqueName: \"kubernetes.io/projected/5c324e69-4bb9-40a6-a883-73a42e9ef646-kube-api-access-4cqjz\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.355848 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjqr6\" (UniqueName: \"kubernetes.io/projected/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-kube-api-access-gjqr6\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.355858 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.355868 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.365200 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c9dv9" event={"ID":"312c2219-c7db-4a28-901f-1d03a379e088","Type":"ContainerStarted","Data":"2ba67fcb195e8f8d2f528735bbb600ef91d9a9fc692c8bd8b1c99e45aa5a6068"} Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.365752 4897 scope.go:117] "RemoveContainer" containerID="1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 
19:06:27.366186 4897 scope.go:117] "RemoveContainer" containerID="21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c" Feb 14 19:06:27 crc kubenswrapper[4897]: E0214 19:06:27.366255 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-86f48db4c-p7v4g_openstack(c0485238-dabe-46e0-87b1-239d64814ef8)\"" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" Feb 14 19:06:27 crc kubenswrapper[4897]: E0214 19:06:27.366460 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-94b476d6c-nbxhf_openstack(aec03a9b-3137-443f-b07f-eade8ffa27f5)\"" pod="openstack/heat-api-94b476d6c-nbxhf" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.461054 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200" (OuterVolumeSpecName: "glance") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "pvc-c2c4846d-e178-48b1-80da-0604a66e3200". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.561008 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") on node \"crc\" " Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.848006 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4b6b13-37e3-4061-9e06-5969de8b94f1" path="/var/lib/kubelet/pods/0e4b6b13-37e3-4061-9e06-5969de8b94f1/volumes" Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.987937 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 19:06:27 crc kubenswrapper[4897]: I0214 19:06:27.988691 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c2c4846d-e178-48b1-80da-0604a66e3200" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200") on node "crc" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.002591 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.005720 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e04401f8-3fac-42bb-924b-1235cb127ed3" (UID: "e04401f8-3fac-42bb-924b-1235cb127ed3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.017493 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e04401f8-3fac-42bb-924b-1235cb127ed3" (UID: "e04401f8-3fac-42bb-924b-1235cb127ed3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.026324 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c324e69-4bb9-40a6-a883-73a42e9ef646" (UID: "5c324e69-4bb9-40a6-a883-73a42e9ef646"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.031788 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.072437 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e04401f8-3fac-42bb-924b-1235cb127ed3" (UID: "e04401f8-3fac-42bb-924b-1235cb127ed3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.082350 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data" (OuterVolumeSpecName: "config-data") pod "5c324e69-4bb9-40a6-a883-73a42e9ef646" (UID: "5c324e69-4bb9-40a6-a883-73a42e9ef646"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.093254 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.100768 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-config-data" (OuterVolumeSpecName: "config-data") pod "b47b5146-8110-4b6d-972a-e3d08f5c7e3c" (UID: "b47b5146-8110-4b6d-972a-e3d08f5c7e3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104490 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104522 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104533 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47b5146-8110-4b6d-972a-e3d08f5c7e3c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104543 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104550 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104559 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c324e69-4bb9-40a6-a883-73a42e9ef646-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104567 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.104577 4897 
reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.113817 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-config" (OuterVolumeSpecName: "config") pod "e04401f8-3fac-42bb-924b-1235cb127ed3" (UID: "e04401f8-3fac-42bb-924b-1235cb127ed3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.115456 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e04401f8-3fac-42bb-924b-1235cb127ed3" (UID: "e04401f8-3fac-42bb-924b-1235cb127ed3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.207047 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.207077 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04401f8-3fac-42bb-924b-1235cb127ed3-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.375749 4897 generic.go:334] "Generic (PLEG): container finished" podID="e276c7c0-3036-4f26-8971-92a5c22b7840" containerID="09bce61846694409c52d5561b533845c9a9af05db94f0dffac6228107bde0ee9" exitCode=0 Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.379582 4897 generic.go:334] "Generic (PLEG): container finished" podID="312c2219-c7db-4a28-901f-1d03a379e088" containerID="2ba67fcb195e8f8d2f528735bbb600ef91d9a9fc692c8bd8b1c99e45aa5a6068" exitCode=0 Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.382890 4897 generic.go:334] "Generic (PLEG): container finished" podID="c892fc72-2d4f-4417-9078-65f0519fcc2d" containerID="0a0b3c262e416aa480cc223b9d58998fbe0039550bfa686625055983b1fb03ef" exitCode=0 Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.389449 4897 generic.go:334] "Generic (PLEG): container finished" podID="adaee017-ddec-4818-acc9-54a5caa1571f" containerID="51f15af252060cf0e1250230839d179aa78a4e866fd1898e7e767a9d820f37fe" exitCode=0 Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.390226 4897 scope.go:117] "RemoveContainer" containerID="1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692" Feb 14 19:06:28 crc kubenswrapper[4897]: E0214 19:06:28.390577 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=heat-cfnapi pod=heat-cfnapi-86f48db4c-p7v4g_openstack(c0485238-dabe-46e0-87b1-239d64814ef8)\"" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.391587 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" event={"ID":"e276c7c0-3036-4f26-8971-92a5c22b7840","Type":"ContainerDied","Data":"09bce61846694409c52d5561b533845c9a9af05db94f0dffac6228107bde0ee9"} Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.391620 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"27b24061-39f4-4ddd-aa33-bdd4da0e90bd","Type":"ContainerDied","Data":"3e8b09179bb7899d5fb2c19bb5fec0a461f3cfead54f71b840acfc3506b4b12e"} Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.391633 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8b09179bb7899d5fb2c19bb5fec0a461f3cfead54f71b840acfc3506b4b12e" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.391641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c9dv9" event={"ID":"312c2219-c7db-4a28-901f-1d03a379e088","Type":"ContainerDied","Data":"2ba67fcb195e8f8d2f528735bbb600ef91d9a9fc692c8bd8b1c99e45aa5a6068"} Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.391664 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f35f-account-create-update-nxqdw" event={"ID":"c892fc72-2d4f-4417-9078-65f0519fcc2d","Type":"ContainerDied","Data":"0a0b3c262e416aa480cc223b9d58998fbe0039550bfa686625055983b1fb03ef"} Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.391676 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5q8wx" 
event={"ID":"adaee017-ddec-4818-acc9-54a5caa1571f","Type":"ContainerDied","Data":"51f15af252060cf0e1250230839d179aa78a4e866fd1898e7e767a9d820f37fe"} Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.398767 4897 scope.go:117] "RemoveContainer" containerID="21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c" Feb 14 19:06:28 crc kubenswrapper[4897]: E0214 19:06:28.399491 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-94b476d6c-nbxhf_openstack(aec03a9b-3137-443f-b07f-eade8ffa27f5)\"" pod="openstack/heat-api-94b476d6c-nbxhf" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.426643 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.445551 4897 scope.go:117] "RemoveContainer" containerID="34c914b71c349cb7c38e42c539e03e76c6eec67c09b0d60f9805530d85c70491" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.479259 4897 scope.go:117] "RemoveContainer" containerID="5793d7b05973908f0526bd3adac4a3c62d4e21ec11d577c92127c1b132491be6" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.514551 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-internal-tls-certs\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.514626 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-httpd-run\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 
19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.514659 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-logs\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.514702 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-scripts\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.514942 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-config-data\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.515007 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-combined-ca-bundle\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.515037 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m57r\" (UniqueName: \"kubernetes.io/projected/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-kube-api-access-9m57r\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.515608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\" (UID: \"27b24061-39f4-4ddd-aa33-bdd4da0e90bd\") " Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.521909 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.536251 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-logs" (OuterVolumeSpecName: "logs") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.537694 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-kube-api-access-9m57r" (OuterVolumeSpecName: "kube-api-access-9m57r") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "kube-api-access-9m57r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.548123 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-scripts" (OuterVolumeSpecName: "scripts") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.574436 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4f5v"] Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.616883 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4f5v"] Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.619627 4897 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.619647 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.619657 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.619666 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m57r\" (UniqueName: \"kubernetes.io/projected/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-kube-api-access-9m57r\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.669086 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67c9665685-zvrsn"] Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.678223 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-67c9665685-zvrsn"] Feb 14 19:06:28 crc kubenswrapper[4897]: I0214 19:06:28.956720 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c" (OuterVolumeSpecName: 
"glance") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "pvc-6b289847-29c6-4db3-8215-32600f200b4c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.043400 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") on node \"crc\" " Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.168545 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.168712 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6b289847-29c6-4db3-8215-32600f200b4c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c") on node "crc" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.197569 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.217138 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-config-data" (OuterVolumeSpecName: "config-data") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.234093 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "27b24061-39f4-4ddd-aa33-bdd4da0e90bd" (UID: "27b24061-39f4-4ddd-aa33-bdd4da0e90bd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.253576 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.253608 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.253621 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.253629 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b24061-39f4-4ddd-aa33-bdd4da0e90bd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.393660 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.409197 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.409565 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c9dv9" event={"ID":"312c2219-c7db-4a28-901f-1d03a379e088","Type":"ContainerDied","Data":"d2884b7d71638754c4eff7e3367ecf1fb5048a2cf0c8826f3a571c84eefe7713"} Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.409638 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2884b7d71638754c4eff7e3367ecf1fb5048a2cf0c8826f3a571c84eefe7713" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.409728 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c9dv9" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.412066 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sc5s2" event={"ID":"8f8b79c5-fdc5-49a7-8da5-278bbc982740","Type":"ContainerDied","Data":"5281b1fad13a16f0e691194a1889804239fd00499e3ec1686ea7f60bcb16f61b"} Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.412100 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5281b1fad13a16f0e691194a1889804239fd00499e3ec1686ea7f60bcb16f61b" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.412164 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sc5s2" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.426519 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerStarted","Data":"bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e"} Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.426735 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-central-agent" containerID="cri-o://3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee" gracePeriod=30 Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.426793 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.426827 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="proxy-httpd" containerID="cri-o://bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e" gracePeriod=30 Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.426870 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="sg-core" containerID="cri-o://cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac" gracePeriod=30 Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.426910 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-notification-agent" containerID="cri-o://d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78" gracePeriod=30 Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.438816 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" event={"ID":"6fd09d35-34e4-4a37-ac93-455f2f12b0d5","Type":"ContainerStarted","Data":"1aee4ac646dc92ceb127741e10d83251661e23f5271fd7774954da5da9967412"} Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.446339 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.463926 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s8cl\" (UniqueName: \"kubernetes.io/projected/8f8b79c5-fdc5-49a7-8da5-278bbc982740-kube-api-access-8s8cl\") pod \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\" (UID: \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.465786 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5vbz\" (UniqueName: \"kubernetes.io/projected/312c2219-c7db-4a28-901f-1d03a379e088-kube-api-access-p5vbz\") pod \"312c2219-c7db-4a28-901f-1d03a379e088\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.465830 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f8b79c5-fdc5-49a7-8da5-278bbc982740-operator-scripts\") pod \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\" (UID: \"8f8b79c5-fdc5-49a7-8da5-278bbc982740\") " Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.466441 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/312c2219-c7db-4a28-901f-1d03a379e088-operator-scripts\") pod \"312c2219-c7db-4a28-901f-1d03a379e088\" (UID: \"312c2219-c7db-4a28-901f-1d03a379e088\") " Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.467188 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8b79c5-fdc5-49a7-8da5-278bbc982740-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f8b79c5-fdc5-49a7-8da5-278bbc982740" (UID: "8f8b79c5-fdc5-49a7-8da5-278bbc982740"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.471579 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312c2219-c7db-4a28-901f-1d03a379e088-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "312c2219-c7db-4a28-901f-1d03a379e088" (UID: "312c2219-c7db-4a28-901f-1d03a379e088"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.475968 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/312c2219-c7db-4a28-901f-1d03a379e088-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.476014 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f8b79c5-fdc5-49a7-8da5-278bbc982740-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.482122 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8b79c5-fdc5-49a7-8da5-278bbc982740-kube-api-access-8s8cl" (OuterVolumeSpecName: "kube-api-access-8s8cl") pod "8f8b79c5-fdc5-49a7-8da5-278bbc982740" (UID: "8f8b79c5-fdc5-49a7-8da5-278bbc982740"). InnerVolumeSpecName "kube-api-access-8s8cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.492350 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312c2219-c7db-4a28-901f-1d03a379e088-kube-api-access-p5vbz" (OuterVolumeSpecName: "kube-api-access-p5vbz") pod "312c2219-c7db-4a28-901f-1d03a379e088" (UID: "312c2219-c7db-4a28-901f-1d03a379e088"). InnerVolumeSpecName "kube-api-access-p5vbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.566330 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.803305428 podStartE2EDuration="10.566311646s" podCreationTimestamp="2026-02-14 19:06:19 +0000 UTC" firstStartedPulling="2026-02-14 19:06:20.943159633 +0000 UTC m=+1433.919568116" lastFinishedPulling="2026-02-14 19:06:27.706165851 +0000 UTC m=+1440.682574334" observedRunningTime="2026-02-14 19:06:29.447533119 +0000 UTC m=+1442.423941612" watchObservedRunningTime="2026-02-14 19:06:29.566311646 +0000 UTC m=+1442.542720129" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.578948 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s8cl\" (UniqueName: \"kubernetes.io/projected/8f8b79c5-fdc5-49a7-8da5-278bbc982740-kube-api-access-8s8cl\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.579182 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5vbz\" (UniqueName: \"kubernetes.io/projected/312c2219-c7db-4a28-901f-1d03a379e088-kube-api-access-p5vbz\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.580545 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" podStartSLOduration=5.580524262 podStartE2EDuration="5.580524262s" podCreationTimestamp="2026-02-14 19:06:24 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:29.478738519 +0000 UTC m=+1442.455146992" watchObservedRunningTime="2026-02-14 19:06:29.580524262 +0000 UTC m=+1442.556932745" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.675707 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.691084 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.720080 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.720748 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312c2219-c7db-4a28-901f-1d03a379e088" containerName="mariadb-database-create" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.720834 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="312c2219-c7db-4a28-901f-1d03a379e088" containerName="mariadb-database-create" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.720911 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c324e69-4bb9-40a6-a883-73a42e9ef646" containerName="heat-api" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.720963 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c324e69-4bb9-40a6-a883-73a42e9ef646" containerName="heat-api" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721018 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4b6b13-37e3-4061-9e06-5969de8b94f1" containerName="heat-cfnapi" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721099 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4b6b13-37e3-4061-9e06-5969de8b94f1" containerName="heat-cfnapi" Feb 14 19:06:29 crc kubenswrapper[4897]: 
E0214 19:06:29.721152 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8b79c5-fdc5-49a7-8da5-278bbc982740" containerName="mariadb-database-create" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721204 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8b79c5-fdc5-49a7-8da5-278bbc982740" containerName="mariadb-database-create" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721261 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="dnsmasq-dns" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721348 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="dnsmasq-dns" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721412 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-log" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721461 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-log" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721538 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="init" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721590 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="init" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721650 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-httpd" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721700 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-httpd" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721757 4897 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-httpd" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721808 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-httpd" Feb 14 19:06:29 crc kubenswrapper[4897]: E0214 19:06:29.721872 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-log" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.721928 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-log" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722288 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="312c2219-c7db-4a28-901f-1d03a379e088" containerName="mariadb-database-create" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722358 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" containerName="dnsmasq-dns" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722436 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-httpd" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722490 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-httpd" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722554 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8b79c5-fdc5-49a7-8da5-278bbc982740" containerName="mariadb-database-create" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722618 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4b6b13-37e3-4061-9e06-5969de8b94f1" containerName="heat-cfnapi" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722675 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5c324e69-4bb9-40a6-a883-73a42e9ef646" containerName="heat-api" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722727 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" containerName="glance-log" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.722778 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" containerName="glance-log" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.724090 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.728014 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.728345 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.728353 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.728605 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wcdfs" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.730053 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791545 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46937bb4-8832-4a52-a593-bee2fc6e292b-logs\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: 
I0214 19:06:29.791607 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46937bb4-8832-4a52-a593-bee2fc6e292b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791688 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791754 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0" Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791786 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " 
pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791839 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtclr\" (UniqueName: \"kubernetes.io/projected/46937bb4-8832-4a52-a593-bee2fc6e292b-kube-api-access-gtclr\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.791989 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.819299 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b24061-39f4-4ddd-aa33-bdd4da0e90bd" path="/var/lib/kubelet/pods/27b24061-39f4-4ddd-aa33-bdd4da0e90bd/volumes"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.820332 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c324e69-4bb9-40a6-a883-73a42e9ef646" path="/var/lib/kubelet/pods/5c324e69-4bb9-40a6-a883-73a42e9ef646/volumes"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.820917 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04401f8-3fac-42bb-924b-1235cb127ed3" path="/var/lib/kubelet/pods/e04401f8-3fac-42bb-924b-1235cb127ed3/volumes"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.898572 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.898655 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.898789 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.898842 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.898927 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtclr\" (UniqueName: \"kubernetes.io/projected/46937bb4-8832-4a52-a593-bee2fc6e292b-kube-api-access-gtclr\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.898970 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.899094 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46937bb4-8832-4a52-a593-bee2fc6e292b-logs\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.899148 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46937bb4-8832-4a52-a593-bee2fc6e292b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.904791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46937bb4-8832-4a52-a593-bee2fc6e292b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.905769 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.906563 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46937bb4-8832-4a52-a593-bee2fc6e292b-logs\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.909629 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.922985 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.923022 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.923066 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e62bbcc549b1e49eee9b1b5ff653b97ed37b658653a03b79e94b1d5ec308d580/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.928606 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46937bb4-8832-4a52-a593-bee2fc6e292b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:29 crc kubenswrapper[4897]: I0214 19:06:29.932990 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtclr\" (UniqueName: \"kubernetes.io/projected/46937bb4-8832-4a52-a593-bee2fc6e292b-kube-api-access-gtclr\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.064375 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6b289847-29c6-4db3-8215-32600f200b4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b289847-29c6-4db3-8215-32600f200b4c\") pod \"glance-default-internal-api-0\" (UID: \"46937bb4-8832-4a52-a593-bee2fc6e292b\") " pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.105502 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5q8wx"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.205536 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adaee017-ddec-4818-acc9-54a5caa1571f-operator-scripts\") pod \"adaee017-ddec-4818-acc9-54a5caa1571f\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") "
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.205611 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wdj9\" (UniqueName: \"kubernetes.io/projected/adaee017-ddec-4818-acc9-54a5caa1571f-kube-api-access-6wdj9\") pod \"adaee017-ddec-4818-acc9-54a5caa1571f\" (UID: \"adaee017-ddec-4818-acc9-54a5caa1571f\") "
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.207541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adaee017-ddec-4818-acc9-54a5caa1571f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adaee017-ddec-4818-acc9-54a5caa1571f" (UID: "adaee017-ddec-4818-acc9-54a5caa1571f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.212136 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adaee017-ddec-4818-acc9-54a5caa1571f-kube-api-access-6wdj9" (OuterVolumeSpecName: "kube-api-access-6wdj9") pod "adaee017-ddec-4818-acc9-54a5caa1571f" (UID: "adaee017-ddec-4818-acc9-54a5caa1571f"). InnerVolumeSpecName "kube-api-access-6wdj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.220467 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f35f-account-create-update-nxqdw"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.227394 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.310079 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49x6f\" (UniqueName: \"kubernetes.io/projected/c892fc72-2d4f-4417-9078-65f0519fcc2d-kube-api-access-49x6f\") pod \"c892fc72-2d4f-4417-9078-65f0519fcc2d\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") "
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.310199 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c892fc72-2d4f-4417-9078-65f0519fcc2d-operator-scripts\") pod \"c892fc72-2d4f-4417-9078-65f0519fcc2d\" (UID: \"c892fc72-2d4f-4417-9078-65f0519fcc2d\") "
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.311060 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adaee017-ddec-4818-acc9-54a5caa1571f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.311079 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wdj9\" (UniqueName: \"kubernetes.io/projected/adaee017-ddec-4818-acc9-54a5caa1571f-kube-api-access-6wdj9\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.311578 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c892fc72-2d4f-4417-9078-65f0519fcc2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c892fc72-2d4f-4417-9078-65f0519fcc2d" (UID: "c892fc72-2d4f-4417-9078-65f0519fcc2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.316194 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c892fc72-2d4f-4417-9078-65f0519fcc2d-kube-api-access-49x6f" (OuterVolumeSpecName: "kube-api-access-49x6f") pod "c892fc72-2d4f-4417-9078-65f0519fcc2d" (UID: "c892fc72-2d4f-4417-9078-65f0519fcc2d"). InnerVolumeSpecName "kube-api-access-49x6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.367505 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.412235 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwth5\" (UniqueName: \"kubernetes.io/projected/e276c7c0-3036-4f26-8971-92a5c22b7840-kube-api-access-wwth5\") pod \"e276c7c0-3036-4f26-8971-92a5c22b7840\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") "
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.412428 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276c7c0-3036-4f26-8971-92a5c22b7840-operator-scripts\") pod \"e276c7c0-3036-4f26-8971-92a5c22b7840\" (UID: \"e276c7c0-3036-4f26-8971-92a5c22b7840\") "
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.413060 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49x6f\" (UniqueName: \"kubernetes.io/projected/c892fc72-2d4f-4417-9078-65f0519fcc2d-kube-api-access-49x6f\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.413075 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c892fc72-2d4f-4417-9078-65f0519fcc2d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.417421 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e276c7c0-3036-4f26-8971-92a5c22b7840-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e276c7c0-3036-4f26-8971-92a5c22b7840" (UID: "e276c7c0-3036-4f26-8971-92a5c22b7840"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.418573 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e276c7c0-3036-4f26-8971-92a5c22b7840-kube-api-access-wwth5" (OuterVolumeSpecName: "kube-api-access-wwth5") pod "e276c7c0-3036-4f26-8971-92a5c22b7840" (UID: "e276c7c0-3036-4f26-8971-92a5c22b7840"). InnerVolumeSpecName "kube-api-access-wwth5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.472607 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5q8wx" event={"ID":"adaee017-ddec-4818-acc9-54a5caa1571f","Type":"ContainerDied","Data":"bd6d1360f950d9031569802a0666973297a129c704f0d9cc1cc7252ec9731521"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.472646 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6d1360f950d9031569802a0666973297a129c704f0d9cc1cc7252ec9731521"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.472701 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5q8wx"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.485491 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.485483 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e2ee-account-create-update-g2kt2" event={"ID":"e276c7c0-3036-4f26-8971-92a5c22b7840","Type":"ContainerDied","Data":"58fa1bd0f64a3aca15c93a14857e52909a82c2df529f71bdcc7c05ab40125c5d"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.486181 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58fa1bd0f64a3aca15c93a14857e52909a82c2df529f71bdcc7c05ab40125c5d"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.500910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f35f-account-create-update-nxqdw" event={"ID":"c892fc72-2d4f-4417-9078-65f0519fcc2d","Type":"ContainerDied","Data":"839cca079ddd7ce193685b69838fe2aeb11941d397de955b338690b1a6baea47"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.500953 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839cca079ddd7ce193685b69838fe2aeb11941d397de955b338690b1a6baea47"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.501015 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f35f-account-create-update-nxqdw"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.506176 4897 generic.go:334] "Generic (PLEG): container finished" podID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerID="bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e" exitCode=0
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.506210 4897 generic.go:334] "Generic (PLEG): container finished" podID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerID="cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac" exitCode=2
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.506217 4897 generic.go:334] "Generic (PLEG): container finished" podID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerID="d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78" exitCode=0
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.506250 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerDied","Data":"bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.506275 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerDied","Data":"cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.506286 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerDied","Data":"d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.510012 4897 generic.go:334] "Generic (PLEG): container finished" podID="6fd09d35-34e4-4a37-ac93-455f2f12b0d5" containerID="1aee4ac646dc92ceb127741e10d83251661e23f5271fd7774954da5da9967412" exitCode=0
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.510119 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" event={"ID":"6fd09d35-34e4-4a37-ac93-455f2f12b0d5","Type":"ContainerDied","Data":"1aee4ac646dc92ceb127741e10d83251661e23f5271fd7774954da5da9967412"}
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.514698 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwth5\" (UniqueName: \"kubernetes.io/projected/e276c7c0-3036-4f26-8971-92a5c22b7840-kube-api-access-wwth5\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.514728 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276c7c0-3036-4f26-8971-92a5c22b7840-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.639621 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bb56bbbfb-v5pf9"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.805936 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5fc95b4d56-9mkgz"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.841223 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.863882 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-94b476d6c-nbxhf"]
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.867233 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bb56bbbfb-v5pf9"
Feb 14 19:06:30 crc kubenswrapper[4897]: I0214 19:06:30.945496 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-86f48db4c-p7v4g"]
Feb 14 19:06:30 crc kubenswrapper[4897]: E0214 19:06:30.969192 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc892fc72_2d4f_4417_9078_65f0519fcc2d.slice/crio-839cca079ddd7ce193685b69838fe2aeb11941d397de955b338690b1a6baea47\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc892fc72_2d4f_4417_9078_65f0519fcc2d.slice\": RecentStats: unable to find data in memory cache]"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.050079 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6fc586c7b4-8x7qx"]
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.050329 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6fc586c7b4-8x7qx" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-log" containerID="cri-o://82dd2f2688766725650bee1eb8d63c5a544e36fef4aa12e9f5c39f2fa22c5032" gracePeriod=30
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.050627 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6fc586c7b4-8x7qx" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-api" containerID="cri-o://50173ab05d1c9f56f9b06b808ec984f5e26ec10f3a8eaf8d7c6e65e628bf172a" gracePeriod=30
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.089778 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 19:06:31 crc kubenswrapper[4897]: W0214 19:06:31.126806 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46937bb4_8832_4a52_a593_bee2fc6e292b.slice/crio-d2fca7a665b07aab699b07fd2706eeb476638dcc1e36bdc7574dc942b62f5aa2 WatchSource:0}: Error finding container d2fca7a665b07aab699b07fd2706eeb476638dcc1e36bdc7574dc942b62f5aa2: Status 404 returned error can't find the container with id d2fca7a665b07aab699b07fd2706eeb476638dcc1e36bdc7574dc942b62f5aa2
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.516405 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-94b476d6c-nbxhf"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.523077 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86f48db4c-p7v4g"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.523483 4897 generic.go:334] "Generic (PLEG): container finished" podID="1639a907-9497-4dea-a153-945921c79337" containerID="82dd2f2688766725650bee1eb8d63c5a544e36fef4aa12e9f5c39f2fa22c5032" exitCode=143
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.523546 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc586c7b4-8x7qx" event={"ID":"1639a907-9497-4dea-a153-945921c79337","Type":"ContainerDied","Data":"82dd2f2688766725650bee1eb8d63c5a544e36fef4aa12e9f5c39f2fa22c5032"}
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.526794 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86f48db4c-p7v4g" event={"ID":"c0485238-dabe-46e0-87b1-239d64814ef8","Type":"ContainerDied","Data":"faef5c8425d0d3bf8cc5341fe39d4e47b9fa63eab88061cc1470e10bccb9e09e"}
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.526808 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86f48db4c-p7v4g"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.526847 4897 scope.go:117] "RemoveContainer" containerID="1f8e3c84a02abbe6b32cf0c18e94bce0d5786ac21eefc77a4e0137444b085692"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.529799 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-94b476d6c-nbxhf" event={"ID":"aec03a9b-3137-443f-b07f-eade8ffa27f5","Type":"ContainerDied","Data":"af1013973a62dac0475707c831692c272c8ec84d5dd1d9fc0a6aa265047d4e27"}
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.529851 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-94b476d6c-nbxhf"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.534190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46937bb4-8832-4a52-a593-bee2fc6e292b","Type":"ContainerStarted","Data":"d2fca7a665b07aab699b07fd2706eeb476638dcc1e36bdc7574dc942b62f5aa2"}
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.648663 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj9b2\" (UniqueName: \"kubernetes.io/projected/c0485238-dabe-46e0-87b1-239d64814ef8-kube-api-access-jj9b2\") pod \"c0485238-dabe-46e0-87b1-239d64814ef8\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.648785 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data-custom\") pod \"c0485238-dabe-46e0-87b1-239d64814ef8\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.648816 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6lhj\" (UniqueName: \"kubernetes.io/projected/aec03a9b-3137-443f-b07f-eade8ffa27f5-kube-api-access-d6lhj\") pod \"aec03a9b-3137-443f-b07f-eade8ffa27f5\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.648855 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data-custom\") pod \"aec03a9b-3137-443f-b07f-eade8ffa27f5\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.648984 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-combined-ca-bundle\") pod \"c0485238-dabe-46e0-87b1-239d64814ef8\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.649090 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data\") pod \"c0485238-dabe-46e0-87b1-239d64814ef8\" (UID: \"c0485238-dabe-46e0-87b1-239d64814ef8\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.649154 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-combined-ca-bundle\") pod \"aec03a9b-3137-443f-b07f-eade8ffa27f5\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.649198 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data\") pod \"aec03a9b-3137-443f-b07f-eade8ffa27f5\" (UID: \"aec03a9b-3137-443f-b07f-eade8ffa27f5\") "
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.652364 4897 scope.go:117] "RemoveContainer" containerID="21fbb0abd09182bae16abea458fb5c9b72e68d2ce410f58956a9fc6fa25a949c"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.660136 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0485238-dabe-46e0-87b1-239d64814ef8" (UID: "c0485238-dabe-46e0-87b1-239d64814ef8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.675051 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec03a9b-3137-443f-b07f-eade8ffa27f5-kube-api-access-d6lhj" (OuterVolumeSpecName: "kube-api-access-d6lhj") pod "aec03a9b-3137-443f-b07f-eade8ffa27f5" (UID: "aec03a9b-3137-443f-b07f-eade8ffa27f5"). InnerVolumeSpecName "kube-api-access-d6lhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.676725 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0485238-dabe-46e0-87b1-239d64814ef8-kube-api-access-jj9b2" (OuterVolumeSpecName: "kube-api-access-jj9b2") pod "c0485238-dabe-46e0-87b1-239d64814ef8" (UID: "c0485238-dabe-46e0-87b1-239d64814ef8"). InnerVolumeSpecName "kube-api-access-jj9b2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.686502 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aec03a9b-3137-443f-b07f-eade8ffa27f5" (UID: "aec03a9b-3137-443f-b07f-eade8ffa27f5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.717088 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec03a9b-3137-443f-b07f-eade8ffa27f5" (UID: "aec03a9b-3137-443f-b07f-eade8ffa27f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.725659 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.725716 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.725757 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.726676 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"235f7e04d5c8603ba95b93f15134ed139784ade9cf49c6bd1886aa661c14e66a"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.726743 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://235f7e04d5c8603ba95b93f15134ed139784ade9cf49c6bd1886aa661c14e66a" gracePeriod=600
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.749306 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0485238-dabe-46e0-87b1-239d64814ef8" (UID: "c0485238-dabe-46e0-87b1-239d64814ef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.751567 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6lhj\" (UniqueName: \"kubernetes.io/projected/aec03a9b-3137-443f-b07f-eade8ffa27f5-kube-api-access-d6lhj\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.751594 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.751603 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.751611 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.751620 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj9b2\" (UniqueName: \"kubernetes.io/projected/c0485238-dabe-46e0-87b1-239d64814ef8-kube-api-access-jj9b2\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.751628 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.905486 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data" (OuterVolumeSpecName: "config-data") pod "aec03a9b-3137-443f-b07f-eade8ffa27f5" (UID: "aec03a9b-3137-443f-b07f-eade8ffa27f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:31 crc kubenswrapper[4897]: I0214 19:06:31.990175 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec03a9b-3137-443f-b07f-eade8ffa27f5-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.080673 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data" (OuterVolumeSpecName: "config-data") pod "c0485238-dabe-46e0-87b1-239d64814ef8" (UID: "c0485238-dabe-46e0-87b1-239d64814ef8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.084535 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk"
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.093588 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0485238-dabe-46e0-87b1-239d64814ef8-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.170430 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-86f48db4c-p7v4g"]
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.187923 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-86f48db4c-p7v4g"]
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.198776 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-operator-scripts\") pod \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") "
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.199419 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd09d35-34e4-4a37-ac93-455f2f12b0d5" (UID: "6fd09d35-34e4-4a37-ac93-455f2f12b0d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.200572 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5ksx\" (UniqueName: \"kubernetes.io/projected/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-kube-api-access-p5ksx\") pod \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\" (UID: \"6fd09d35-34e4-4a37-ac93-455f2f12b0d5\") "
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.201695 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.206419 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-kube-api-access-p5ksx" (OuterVolumeSpecName: "kube-api-access-p5ksx") pod "6fd09d35-34e4-4a37-ac93-455f2f12b0d5" (UID: "6fd09d35-34e4-4a37-ac93-455f2f12b0d5"). InnerVolumeSpecName "kube-api-access-p5ksx".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.209131 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-94b476d6c-nbxhf"] Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.223708 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-94b476d6c-nbxhf"] Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.303779 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5ksx\" (UniqueName: \"kubernetes.io/projected/6fd09d35-34e4-4a37-ac93-455f2f12b0d5-kube-api-access-p5ksx\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.597399 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46937bb4-8832-4a52-a593-bee2fc6e292b","Type":"ContainerStarted","Data":"f59efa31aeccf29105189c1bae7a6e6ad670ccb0c90d271662293ecbb5d02d3e"} Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.609942 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.609943 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5d42-account-create-update-kw2zk" event={"ID":"6fd09d35-34e4-4a37-ac93-455f2f12b0d5","Type":"ContainerDied","Data":"938b8df2238e6f9623953781b3a6f5eba47986aefd882218af2e3e1073ba7a64"} Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.610058 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="938b8df2238e6f9623953781b3a6f5eba47986aefd882218af2e3e1073ba7a64" Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.614465 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="235f7e04d5c8603ba95b93f15134ed139784ade9cf49c6bd1886aa661c14e66a" exitCode=0 Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.614514 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"235f7e04d5c8603ba95b93f15134ed139784ade9cf49c6bd1886aa661c14e66a"} Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.614541 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"} Feb 14 19:06:32 crc kubenswrapper[4897]: I0214 19:06:32.614560 4897 scope.go:117] "RemoveContainer" containerID="68d22528009a2caf1cd383d357574b535616ffbac78d6b95052fe2b58aa80740" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.382454 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526281 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-scripts\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526337 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t45m4\" (UniqueName: \"kubernetes.io/projected/8ec536ba-5940-41a9-8334-b622eeb2e669-kube-api-access-t45m4\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526406 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-config-data\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526490 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-sg-core-conf-yaml\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526561 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-log-httpd\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526643 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-combined-ca-bundle\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.526678 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-run-httpd\") pod \"8ec536ba-5940-41a9-8334-b622eeb2e669\" (UID: \"8ec536ba-5940-41a9-8334-b622eeb2e669\") " Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.527448 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.527512 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.533141 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-scripts" (OuterVolumeSpecName: "scripts") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.547202 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec536ba-5940-41a9-8334-b622eeb2e669-kube-api-access-t45m4" (OuterVolumeSpecName: "kube-api-access-t45m4") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "kube-api-access-t45m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.578147 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.626970 4897 generic.go:334] "Generic (PLEG): container finished" podID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerID="3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee" exitCode=0 Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.627059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerDied","Data":"3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee"} Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.627093 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec536ba-5940-41a9-8334-b622eeb2e669","Type":"ContainerDied","Data":"e54637865acd89719a36f3585313958f513d42db19f48a71db6a50e78b523507"} Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.627115 4897 scope.go:117] "RemoveContainer" containerID="bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e" Feb 14 
19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.627267 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.629483 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.629594 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.629659 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t45m4\" (UniqueName: \"kubernetes.io/projected/8ec536ba-5940-41a9-8334-b622eeb2e669-kube-api-access-t45m4\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.629721 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.629776 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec536ba-5940-41a9-8334-b622eeb2e669-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.632232 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46937bb4-8832-4a52-a593-bee2fc6e292b","Type":"ContainerStarted","Data":"48c2dcec026bd3d5c62219b8074d407d24934a827c8abbc877a6c3b64a23f399"} Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.645248 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.656630 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.65661336 podStartE2EDuration="4.65661336s" podCreationTimestamp="2026-02-14 19:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:06:33.652986017 +0000 UTC m=+1446.629394520" watchObservedRunningTime="2026-02-14 19:06:33.65661336 +0000 UTC m=+1446.633021843" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.687787 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-config-data" (OuterVolumeSpecName: "config-data") pod "8ec536ba-5940-41a9-8334-b622eeb2e669" (UID: "8ec536ba-5940-41a9-8334-b622eeb2e669"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.732077 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.732109 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec536ba-5940-41a9-8334-b622eeb2e669-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.767770 4897 scope.go:117] "RemoveContainer" containerID="cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.786575 4897 scope.go:117] "RemoveContainer" containerID="d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.805833 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" path="/var/lib/kubelet/pods/aec03a9b-3137-443f-b07f-eade8ffa27f5/volumes" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.806442 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" path="/var/lib/kubelet/pods/c0485238-dabe-46e0-87b1-239d64814ef8/volumes" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.819503 4897 scope.go:117] "RemoveContainer" containerID="3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.846533 4897 scope.go:117] "RemoveContainer" containerID="bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e" Feb 14 19:06:33 crc kubenswrapper[4897]: E0214 19:06:33.848392 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e\": container with ID starting with bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e not found: ID does not exist" containerID="bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.848421 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e"} err="failed to get container status \"bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e\": rpc error: code = NotFound desc = could not find container \"bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e\": container with ID starting with bc5208014e029d4719ee7e9c813e802970240b70564afc243fb3949e0128533e not found: ID does not exist" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.848444 4897 scope.go:117] "RemoveContainer" containerID="cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac" Feb 14 19:06:33 crc kubenswrapper[4897]: E0214 19:06:33.848764 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac\": container with ID starting with cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac not found: ID does not exist" containerID="cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.848804 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac"} err="failed to get container status \"cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac\": rpc error: code = NotFound desc = could not find container \"cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac\": container with ID 
starting with cd6dced0b7a1957b3c75577a5f4243c6657324eb755ea4f3eb2fd3e0db10a3ac not found: ID does not exist" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.848837 4897 scope.go:117] "RemoveContainer" containerID="d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78" Feb 14 19:06:33 crc kubenswrapper[4897]: E0214 19:06:33.849090 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78\": container with ID starting with d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78 not found: ID does not exist" containerID="d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.849112 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78"} err="failed to get container status \"d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78\": rpc error: code = NotFound desc = could not find container \"d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78\": container with ID starting with d8ac3e5b7ea4f0ecdf02355d8e670cc5ba9a020b266b7bdf645f666a5afd9b78 not found: ID does not exist" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.849125 4897 scope.go:117] "RemoveContainer" containerID="3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee" Feb 14 19:06:33 crc kubenswrapper[4897]: E0214 19:06:33.849299 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee\": container with ID starting with 3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee not found: ID does not exist" containerID="3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee" Feb 14 
19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.849319 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee"} err="failed to get container status \"3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee\": rpc error: code = NotFound desc = could not find container \"3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee\": container with ID starting with 3b14b98294fbe1ca48f5d2ddc6cd466a686b102b7d42c2937b59eba4f84ed7ee not found: ID does not exist" Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.949831 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:33 crc kubenswrapper[4897]: I0214 19:06:33.965892 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.002110 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003195 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-notification-agent" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003218 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-notification-agent" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003250 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" containerName="heat-cfnapi" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003258 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" containerName="heat-cfnapi" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003275 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e276c7c0-3036-4f26-8971-92a5c22b7840" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003282 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276c7c0-3036-4f26-8971-92a5c22b7840" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003300 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="sg-core" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003307 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="sg-core" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003330 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd09d35-34e4-4a37-ac93-455f2f12b0d5" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003337 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd09d35-34e4-4a37-ac93-455f2f12b0d5" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003357 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adaee017-ddec-4818-acc9-54a5caa1571f" containerName="mariadb-database-create" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003364 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="adaee017-ddec-4818-acc9-54a5caa1571f" containerName="mariadb-database-create" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003377 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c892fc72-2d4f-4417-9078-65f0519fcc2d" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003384 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c892fc72-2d4f-4417-9078-65f0519fcc2d" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 
19:06:34.003406 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerName="heat-api" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003412 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerName="heat-api" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003425 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="proxy-httpd" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003431 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="proxy-httpd" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003450 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-central-agent" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003456 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-central-agent" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.003476 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerName="heat-api" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003484 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerName="heat-api" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003885 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd09d35-34e4-4a37-ac93-455f2f12b0d5" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003916 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-notification-agent" Feb 14 19:06:34 crc kubenswrapper[4897]: 
I0214 19:06:34.003932 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c892fc72-2d4f-4417-9078-65f0519fcc2d" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003957 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" containerName="heat-cfnapi" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003982 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" containerName="heat-cfnapi" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.003990 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerName="heat-api" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.004009 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="ceilometer-central-agent" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.004018 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="proxy-httpd" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.004073 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" containerName="sg-core" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.004095 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="adaee017-ddec-4818-acc9-54a5caa1571f" containerName="mariadb-database-create" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.004119 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e276c7c0-3036-4f26-8971-92a5c22b7840" containerName="mariadb-account-create-update" Feb 14 19:06:34 crc kubenswrapper[4897]: E0214 19:06:34.004596 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" 
containerName="heat-cfnapi" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.004614 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0485238-dabe-46e0-87b1-239d64814ef8" containerName="heat-cfnapi" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.005085 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec03a9b-3137-443f-b07f-eade8ffa27f5" containerName="heat-api" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.028200 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.030449 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.031062 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.071420 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.140869 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-config-data\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.140923 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjf77\" (UniqueName: \"kubernetes.io/projected/05c924f7-d6f6-4a90-b527-95498fd32761-kube-api-access-gjf77\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.140946 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.141011 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.141090 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-log-httpd\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.141106 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-run-httpd\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.141157 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-scripts\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243434 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243529 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-log-httpd\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243550 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-run-httpd\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243604 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-scripts\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243664 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-config-data\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243689 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjf77\" (UniqueName: \"kubernetes.io/projected/05c924f7-d6f6-4a90-b527-95498fd32761-kube-api-access-gjf77\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.243706 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.244331 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-log-httpd\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.244432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-run-httpd\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.248948 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-scripts\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.251960 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.252238 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.253791 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-config-data\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.265852 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjf77\" (UniqueName: \"kubernetes.io/projected/05c924f7-d6f6-4a90-b527-95498fd32761-kube-api-access-gjf77\") pod \"ceilometer-0\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.346514 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.657263 4897 generic.go:334] "Generic (PLEG): container finished" podID="1639a907-9497-4dea-a153-945921c79337" containerID="50173ab05d1c9f56f9b06b808ec984f5e26ec10f3a8eaf8d7c6e65e628bf172a" exitCode=0 Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.658310 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc586c7b4-8x7qx" event={"ID":"1639a907-9497-4dea-a153-945921c79337","Type":"ContainerDied","Data":"50173ab05d1c9f56f9b06b808ec984f5e26ec10f3a8eaf8d7c6e65e628bf172a"} Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.770262 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4tshq"] Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.771719 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.782995 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.783121 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.783575 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xvvb8" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.798213 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4tshq"] Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.859338 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29c25\" (UniqueName: \"kubernetes.io/projected/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-kube-api-access-29c25\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.859616 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.859652 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-config-data\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " 
pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.859736 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-scripts\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.962058 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-scripts\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.962231 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29c25\" (UniqueName: \"kubernetes.io/projected/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-kube-api-access-29c25\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.962263 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.962285 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-config-data\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " 
pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.968903 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.989654 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-scripts\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.990182 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-config-data\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:34 crc kubenswrapper[4897]: I0214 19:06:34.991714 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29c25\" (UniqueName: \"kubernetes.io/projected/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-kube-api-access-29c25\") pod \"nova-cell0-conductor-db-sync-4tshq\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.091803 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc586c7b4-8x7qx" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.115891 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:06:35 crc kubenswrapper[4897]: W0214 19:06:35.121321 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c924f7_d6f6_4a90_b527_95498fd32761.slice/crio-4cae331df5e9c76119707cf25f0f19788dba658430acedaa7d84855c93fdf615 WatchSource:0}: Error finding container 4cae331df5e9c76119707cf25f0f19788dba658430acedaa7d84855c93fdf615: Status 404 returned error can't find the container with id 4cae331df5e9c76119707cf25f0f19788dba658430acedaa7d84855c93fdf615 Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.121491 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.173850 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-internal-tls-certs\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.173917 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-public-tls-certs\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.173949 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1639a907-9497-4dea-a153-945921c79337-logs\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.173965 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-config-data\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.174057 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-combined-ca-bundle\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.174091 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97ffh\" (UniqueName: \"kubernetes.io/projected/1639a907-9497-4dea-a153-945921c79337-kube-api-access-97ffh\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.174198 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-scripts\") pod \"1639a907-9497-4dea-a153-945921c79337\" (UID: \"1639a907-9497-4dea-a153-945921c79337\") " Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.175565 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1639a907-9497-4dea-a153-945921c79337-logs" (OuterVolumeSpecName: "logs") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.187756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-scripts" (OuterVolumeSpecName: "scripts") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.223238 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1639a907-9497-4dea-a153-945921c79337-kube-api-access-97ffh" (OuterVolumeSpecName: "kube-api-access-97ffh") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). InnerVolumeSpecName "kube-api-access-97ffh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.276254 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.276278 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1639a907-9497-4dea-a153-945921c79337-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.276287 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97ffh\" (UniqueName: \"kubernetes.io/projected/1639a907-9497-4dea-a153-945921c79337-kube-api-access-97ffh\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.286216 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-config-data" (OuterVolumeSpecName: "config-data") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.288624 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.359242 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.384301 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.384330 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.384339 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.384400 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-public-tls-certs" (OuterVolumeSpecName: 
"public-tls-certs") pod "1639a907-9497-4dea-a153-945921c79337" (UID: "1639a907-9497-4dea-a153-945921c79337"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.486622 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1639a907-9497-4dea-a153-945921c79337-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.633369 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4tshq"] Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.667965 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerStarted","Data":"4cae331df5e9c76119707cf25f0f19788dba658430acedaa7d84855c93fdf615"} Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.669724 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4tshq" event={"ID":"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5","Type":"ContainerStarted","Data":"47490eef1ce51e5d70f98d210b98cd057845d1bbaa9d197a38b6ef6eddbf7089"} Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.672915 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc586c7b4-8x7qx" event={"ID":"1639a907-9497-4dea-a153-945921c79337","Type":"ContainerDied","Data":"5ef470851ae3c487d97ac4629e821b476854b1422c5d0f21257bae3bf1fa1dac"} Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.672976 4897 scope.go:117] "RemoveContainer" containerID="50173ab05d1c9f56f9b06b808ec984f5e26ec10f3a8eaf8d7c6e65e628bf172a" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.672973 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fc586c7b4-8x7qx" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.742530 4897 scope.go:117] "RemoveContainer" containerID="82dd2f2688766725650bee1eb8d63c5a544e36fef4aa12e9f5c39f2fa22c5032" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.775976 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6fc586c7b4-8x7qx"] Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.785430 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6fc586c7b4-8x7qx"] Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.804878 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1639a907-9497-4dea-a153-945921c79337" path="/var/lib/kubelet/pods/1639a907-9497-4dea-a153-945921c79337/volumes" Feb 14 19:06:35 crc kubenswrapper[4897]: I0214 19:06:35.805757 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec536ba-5940-41a9-8334-b622eeb2e669" path="/var/lib/kubelet/pods/8ec536ba-5940-41a9-8334-b622eeb2e669/volumes" Feb 14 19:06:36 crc kubenswrapper[4897]: I0214 19:06:36.495399 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-dc4df654d-9w4f2" Feb 14 19:06:36 crc kubenswrapper[4897]: I0214 19:06:36.599463 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5d7f548864-bdfgg"] Feb 14 19:06:36 crc kubenswrapper[4897]: I0214 19:06:36.599707 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5d7f548864-bdfgg" podUID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" containerName="heat-engine" containerID="cri-o://35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9" gracePeriod=60 Feb 14 19:06:36 crc kubenswrapper[4897]: I0214 19:06:36.634535 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:36 crc kubenswrapper[4897]: 
I0214 19:06:36.691281 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerStarted","Data":"1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30"} Feb 14 19:06:36 crc kubenswrapper[4897]: I0214 19:06:36.691589 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerStarted","Data":"65a4f3f5d09b808eafa14bdd5240146ed468b2bc84bca8e08a447813fed69436"} Feb 14 19:06:37 crc kubenswrapper[4897]: I0214 19:06:37.711074 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerStarted","Data":"5f9c1d720708c4297a0f07c27bd2918df3f1a458ecce0c103b985be4d192c172"} Feb 14 19:06:38 crc kubenswrapper[4897]: I0214 19:06:38.724770 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerStarted","Data":"6bfa5098989c0331042070f701aa1a6806cbe4bf88b89bb99887c980f3e3ca7b"} Feb 14 19:06:38 crc kubenswrapper[4897]: I0214 19:06:38.725305 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 19:06:38 crc kubenswrapper[4897]: I0214 19:06:38.744392 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.591083769 podStartE2EDuration="5.744375745s" podCreationTimestamp="2026-02-14 19:06:33 +0000 UTC" firstStartedPulling="2026-02-14 19:06:35.130839925 +0000 UTC m=+1448.107248408" lastFinishedPulling="2026-02-14 19:06:38.284131901 +0000 UTC m=+1451.260540384" observedRunningTime="2026-02-14 19:06:38.741884027 +0000 UTC m=+1451.718292520" watchObservedRunningTime="2026-02-14 19:06:38.744375745 +0000 UTC m=+1451.720784228" Feb 14 19:06:39 crc kubenswrapper[4897]: I0214 19:06:39.173520 4897 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:39 crc kubenswrapper[4897]: E0214 19:06:39.964132 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:06:39 crc kubenswrapper[4897]: E0214 19:06:39.968840 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:06:39 crc kubenswrapper[4897]: E0214 19:06:39.969967 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:06:39 crc kubenswrapper[4897]: E0214 19:06:39.969999 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5d7f548864-bdfgg" podUID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" containerName="heat-engine" Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.368593 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.368865 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 
14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.418021 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.419665 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.746528 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-central-agent" containerID="cri-o://65a4f3f5d09b808eafa14bdd5240146ed468b2bc84bca8e08a447813fed69436" gracePeriod=30 Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.747057 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="proxy-httpd" containerID="cri-o://6bfa5098989c0331042070f701aa1a6806cbe4bf88b89bb99887c980f3e3ca7b" gracePeriod=30 Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.747360 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-notification-agent" containerID="cri-o://1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30" gracePeriod=30 Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.747108 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.747441 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:40 crc kubenswrapper[4897]: I0214 19:06:40.747281 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" 
containerName="sg-core" containerID="cri-o://5f9c1d720708c4297a0f07c27bd2918df3f1a458ecce0c103b985be4d192c172" gracePeriod=30 Feb 14 19:06:41 crc kubenswrapper[4897]: E0214 19:06:41.277906 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c924f7_d6f6_4a90_b527_95498fd32761.slice/crio-1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c924f7_d6f6_4a90_b527_95498fd32761.slice/crio-conmon-65a4f3f5d09b808eafa14bdd5240146ed468b2bc84bca8e08a447813fed69436.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c924f7_d6f6_4a90_b527_95498fd32761.slice/crio-conmon-1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761357 4897 generic.go:334] "Generic (PLEG): container finished" podID="05c924f7-d6f6-4a90-b527-95498fd32761" containerID="6bfa5098989c0331042070f701aa1a6806cbe4bf88b89bb99887c980f3e3ca7b" exitCode=0 Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761637 4897 generic.go:334] "Generic (PLEG): container finished" podID="05c924f7-d6f6-4a90-b527-95498fd32761" containerID="5f9c1d720708c4297a0f07c27bd2918df3f1a458ecce0c103b985be4d192c172" exitCode=2 Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761444 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerDied","Data":"6bfa5098989c0331042070f701aa1a6806cbe4bf88b89bb99887c980f3e3ca7b"} Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761694 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerDied","Data":"5f9c1d720708c4297a0f07c27bd2918df3f1a458ecce0c103b985be4d192c172"} Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761714 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerDied","Data":"1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30"} Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761651 4897 generic.go:334] "Generic (PLEG): container finished" podID="05c924f7-d6f6-4a90-b527-95498fd32761" containerID="1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30" exitCode=0 Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.761741 4897 generic.go:334] "Generic (PLEG): container finished" podID="05c924f7-d6f6-4a90-b527-95498fd32761" containerID="65a4f3f5d09b808eafa14bdd5240146ed468b2bc84bca8e08a447813fed69436" exitCode=0 Feb 14 19:06:41 crc kubenswrapper[4897]: I0214 19:06:41.762063 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerDied","Data":"65a4f3f5d09b808eafa14bdd5240146ed468b2bc84bca8e08a447813fed69436"} Feb 14 19:06:42 crc kubenswrapper[4897]: I0214 19:06:42.776674 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 19:06:42 crc kubenswrapper[4897]: I0214 19:06:42.776704 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 19:06:44 crc kubenswrapper[4897]: I0214 19:06:44.424937 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 19:06:44 crc kubenswrapper[4897]: I0214 19:06:44.425312 4897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 19:06:44 crc kubenswrapper[4897]: I0214 19:06:44.490254 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Feb 14 19:06:45 crc kubenswrapper[4897]: I0214 19:06:45.817647 4897 generic.go:334] "Generic (PLEG): container finished" podID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" containerID="35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9" exitCode=0 Feb 14 19:06:45 crc kubenswrapper[4897]: I0214 19:06:45.817735 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d7f548864-bdfgg" event={"ID":"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5","Type":"ContainerDied","Data":"35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9"} Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.578835 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.618585 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data-custom\") pod \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.618731 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-combined-ca-bundle\") pod \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.618844 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79b9v\" (UniqueName: \"kubernetes.io/projected/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-kube-api-access-79b9v\") pod \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.619104 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data\") pod \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\" (UID: \"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.626147 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" (UID: "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.631936 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-kube-api-access-79b9v" (OuterVolumeSpecName: "kube-api-access-79b9v") pod "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" (UID: "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5"). InnerVolumeSpecName "kube-api-access-79b9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.646861 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.661950 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" (UID: "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.696727 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data" (OuterVolumeSpecName: "config-data") pod "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" (UID: "89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.720967 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-log-httpd\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721065 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-config-data\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721100 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjf77\" (UniqueName: \"kubernetes.io/projected/05c924f7-d6f6-4a90-b527-95498fd32761-kube-api-access-gjf77\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721153 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-scripts\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721305 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-combined-ca-bundle\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721354 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-sg-core-conf-yaml\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721406 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-run-httpd\") pod \"05c924f7-d6f6-4a90-b527-95498fd32761\" (UID: \"05c924f7-d6f6-4a90-b527-95498fd32761\") " Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.721614 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.722016 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.722048 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79b9v\" (UniqueName: \"kubernetes.io/projected/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-kube-api-access-79b9v\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.722060 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.722068 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.722077 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.722306 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.725506 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c924f7-d6f6-4a90-b527-95498fd32761-kube-api-access-gjf77" (OuterVolumeSpecName: "kube-api-access-gjf77") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "kube-api-access-gjf77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.725645 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-scripts" (OuterVolumeSpecName: "scripts") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.756039 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.823664 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.824096 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c924f7-d6f6-4a90-b527-95498fd32761-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.824112 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjf77\" (UniqueName: \"kubernetes.io/projected/05c924f7-d6f6-4a90-b527-95498fd32761-kube-api-access-gjf77\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.824126 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.830735 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.841159 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-config-data" (OuterVolumeSpecName: "config-data") pod "05c924f7-d6f6-4a90-b527-95498fd32761" (UID: "05c924f7-d6f6-4a90-b527-95498fd32761"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.879257 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.883289 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d7f548864-bdfgg" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.926095 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4tshq" podStartSLOduration=2.329664625 podStartE2EDuration="13.926073066s" podCreationTimestamp="2026-02-14 19:06:34 +0000 UTC" firstStartedPulling="2026-02-14 19:06:35.631551968 +0000 UTC m=+1448.607960451" lastFinishedPulling="2026-02-14 19:06:47.227960409 +0000 UTC m=+1460.204368892" observedRunningTime="2026-02-14 19:06:47.903197778 +0000 UTC m=+1460.879606291" watchObservedRunningTime="2026-02-14 19:06:47.926073066 +0000 UTC m=+1460.902481569" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.926471 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.926497 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c924f7-d6f6-4a90-b527-95498fd32761-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.963623 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c924f7-d6f6-4a90-b527-95498fd32761","Type":"ContainerDied","Data":"4cae331df5e9c76119707cf25f0f19788dba658430acedaa7d84855c93fdf615"} Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.963673 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-4tshq" event={"ID":"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5","Type":"ContainerStarted","Data":"7daa3e9182145db070e6ed99d9195899d85acbe4ee391f4c15f379f0c1bf3b1f"} Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.963688 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d7f548864-bdfgg" event={"ID":"89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5","Type":"ContainerDied","Data":"72652d35d5f1e8c7b0d13fa03db67857d8526182a6473dbbd04f2ca4e958c746"} Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.963727 4897 scope.go:117] "RemoveContainer" containerID="6bfa5098989c0331042070f701aa1a6806cbe4bf88b89bb99887c980f3e3ca7b" Feb 14 19:06:47 crc kubenswrapper[4897]: I0214 19:06:47.995865 4897 scope.go:117] "RemoveContainer" containerID="5f9c1d720708c4297a0f07c27bd2918df3f1a458ecce0c103b985be4d192c172" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.001041 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.018007 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.033730 4897 scope.go:117] "RemoveContainer" containerID="1fb5bfdca2c83603eb389ec6a124bd8ddc15eda11e6df5821e739e4e34cedc30" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.034365 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5d7f548864-bdfgg"] Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.047513 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5d7f548864-bdfgg"] Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.055956 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056499 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" 
containerName="heat-engine" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056520 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" containerName="heat-engine" Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056541 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-notification-agent" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056550 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-notification-agent" Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056570 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-central-agent" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056578 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-central-agent" Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056594 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="proxy-httpd" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056602 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="proxy-httpd" Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056621 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="sg-core" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056629 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="sg-core" Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056647 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639a907-9497-4dea-a153-945921c79337" 
containerName="placement-api" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056657 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-api" Feb 14 19:06:48 crc kubenswrapper[4897]: E0214 19:06:48.056679 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-log" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056687 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-log" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056959 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-api" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056978 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="proxy-httpd" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.056995 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-central-agent" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.057008 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="ceilometer-notification-agent" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.057022 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1639a907-9497-4dea-a153-945921c79337" containerName="placement-log" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.057051 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" containerName="heat-engine" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.057071 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="05c924f7-d6f6-4a90-b527-95498fd32761" containerName="sg-core" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.059803 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.062152 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.062746 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.067884 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.077419 4897 scope.go:117] "RemoveContainer" containerID="65a4f3f5d09b808eafa14bdd5240146ed468b2bc84bca8e08a447813fed69436" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.098187 4897 scope.go:117] "RemoveContainer" containerID="35276787e444e0dba8fbe84b677288f7946efc97c106da56dcafe88909d203d9" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131324 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131388 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-run-httpd\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131419 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mq5k9\" (UniqueName: \"kubernetes.io/projected/07a95e5b-bcf6-42b9-a442-e114aa79c508-kube-api-access-mq5k9\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131568 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131637 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-scripts\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131694 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-log-httpd\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.131943 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-config-data\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234579 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234655 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-run-httpd\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234689 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq5k9\" (UniqueName: \"kubernetes.io/projected/07a95e5b-bcf6-42b9-a442-e114aa79c508-kube-api-access-mq5k9\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234729 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-scripts\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234798 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-log-httpd\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.234865 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-config-data\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.235236 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-run-httpd\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.235426 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-log-httpd\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.238187 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.240290 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-config-data\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.241398 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.244611 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-scripts\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.255110 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq5k9\" (UniqueName: \"kubernetes.io/projected/07a95e5b-bcf6-42b9-a442-e114aa79c508-kube-api-access-mq5k9\") pod \"ceilometer-0\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.387908 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:06:48 crc kubenswrapper[4897]: I0214 19:06:48.971116 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:06:49 crc kubenswrapper[4897]: I0214 19:06:49.808930 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c924f7-d6f6-4a90-b527-95498fd32761" path="/var/lib/kubelet/pods/05c924f7-d6f6-4a90-b527-95498fd32761/volumes" Feb 14 19:06:49 crc kubenswrapper[4897]: I0214 19:06:49.810816 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5" path="/var/lib/kubelet/pods/89d0f2b3-6afb-4ffd-a2bb-4584f6792fd5/volumes" Feb 14 19:06:49 crc kubenswrapper[4897]: I0214 19:06:49.921242 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerStarted","Data":"a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100"} Feb 14 19:06:49 crc kubenswrapper[4897]: I0214 19:06:49.921289 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerStarted","Data":"38d28d435e21d34181a0255bc4b485f30221c2e5a2a6cae41a4b6d726fa6f972"} Feb 14 19:06:50 crc kubenswrapper[4897]: I0214 19:06:50.934303 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerStarted","Data":"8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358"} Feb 14 19:06:51 crc kubenswrapper[4897]: I0214 19:06:51.946837 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerStarted","Data":"a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df"} Feb 14 19:06:52 crc kubenswrapper[4897]: I0214 19:06:52.969391 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerStarted","Data":"8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66"} Feb 14 19:06:52 crc kubenswrapper[4897]: I0214 19:06:52.969758 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 19:06:52 crc kubenswrapper[4897]: I0214 19:06:52.994835 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.972313012 podStartE2EDuration="5.994816485s" podCreationTimestamp="2026-02-14 19:06:47 +0000 UTC" firstStartedPulling="2026-02-14 19:06:48.978134312 +0000 UTC m=+1461.954542795" lastFinishedPulling="2026-02-14 19:06:52.000637755 +0000 UTC m=+1464.977046268" observedRunningTime="2026-02-14 19:06:52.992501252 +0000 UTC m=+1465.968909765" watchObservedRunningTime="2026-02-14 19:06:52.994816485 +0000 UTC m=+1465.971224988" Feb 14 19:06:58 crc kubenswrapper[4897]: I0214 19:06:58.394166 4897 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" 
cgroupName=["kubepods","besteffort","podb47b5146-8110-4b6d-972a-e3d08f5c7e3c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb47b5146-8110-4b6d-972a-e3d08f5c7e3c] : Timed out while waiting for systemd to remove kubepods-besteffort-podb47b5146_8110_4b6d_972a_e3d08f5c7e3c.slice" Feb 14 19:06:58 crc kubenswrapper[4897]: E0214 19:06:58.394717 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb47b5146-8110-4b6d-972a-e3d08f5c7e3c] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb47b5146-8110-4b6d-972a-e3d08f5c7e3c] : Timed out while waiting for systemd to remove kubepods-besteffort-podb47b5146_8110_4b6d_972a_e3d08f5c7e3c.slice" pod="openstack/glance-default-external-api-0" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.038924 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.084639 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.096884 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.110807 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.112672 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.115722 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.115894 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.122675 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.305927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.305989 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-scripts\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.306059 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.306111 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c42ac74f-f937-4f5a-973e-a97c0ec3986a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.306203 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.306244 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csbn\" (UniqueName: \"kubernetes.io/projected/c42ac74f-f937-4f5a-973e-a97c0ec3986a-kube-api-access-6csbn\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.306282 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42ac74f-f937-4f5a-973e-a97c0ec3986a-logs\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.306364 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-config-data\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408098 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408152 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csbn\" (UniqueName: \"kubernetes.io/projected/c42ac74f-f937-4f5a-973e-a97c0ec3986a-kube-api-access-6csbn\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408181 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42ac74f-f937-4f5a-973e-a97c0ec3986a-logs\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408230 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-config-data\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408317 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408340 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-scripts\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408373 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408401 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c42ac74f-f937-4f5a-973e-a97c0ec3986a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.408898 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c42ac74f-f937-4f5a-973e-a97c0ec3986a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.409740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42ac74f-f937-4f5a-973e-a97c0ec3986a-logs\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.413102 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.413140 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c60ca6e58b7228eda216e886c2f088869a9fd33844e5fbdaaee4673098f90fe3/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.415102 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-config-data\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.415591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.415678 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.426380 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42ac74f-f937-4f5a-973e-a97c0ec3986a-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.427543 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csbn\" (UniqueName: \"kubernetes.io/projected/c42ac74f-f937-4f5a-973e-a97c0ec3986a-kube-api-access-6csbn\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.469703 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2c4846d-e178-48b1-80da-0604a66e3200\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c4846d-e178-48b1-80da-0604a66e3200\") pod \"glance-default-external-api-0\" (UID: \"c42ac74f-f937-4f5a-973e-a97c0ec3986a\") " pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.749440 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 19:06:59 crc kubenswrapper[4897]: I0214 19:06:59.814922 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47b5146-8110-4b6d-972a-e3d08f5c7e3c" path="/var/lib/kubelet/pods/b47b5146-8110-4b6d-972a-e3d08f5c7e3c/volumes" Feb 14 19:07:00 crc kubenswrapper[4897]: W0214 19:07:00.327200 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42ac74f_f937_4f5a_973e_a97c0ec3986a.slice/crio-7c7552a2565d1e015f9b5cbcf39060a03a6a571c6a44b1ec12ef3ae3e9c537a2 WatchSource:0}: Error finding container 7c7552a2565d1e015f9b5cbcf39060a03a6a571c6a44b1ec12ef3ae3e9c537a2: Status 404 returned error can't find the container with id 7c7552a2565d1e015f9b5cbcf39060a03a6a571c6a44b1ec12ef3ae3e9c537a2 Feb 14 19:07:00 crc kubenswrapper[4897]: I0214 19:07:00.331431 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.077113 4897 generic.go:334] "Generic (PLEG): container finished" podID="1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" containerID="7daa3e9182145db070e6ed99d9195899d85acbe4ee391f4c15f379f0c1bf3b1f" exitCode=0 Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.077472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4tshq" event={"ID":"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5","Type":"ContainerDied","Data":"7daa3e9182145db070e6ed99d9195899d85acbe4ee391f4c15f379f0c1bf3b1f"} Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.080775 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c42ac74f-f937-4f5a-973e-a97c0ec3986a","Type":"ContainerStarted","Data":"157f3d492cbe516db4d362594f4f0767060593f1e505d9cadd9c7c7d69c31f4b"} Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.080806 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c42ac74f-f937-4f5a-973e-a97c0ec3986a","Type":"ContainerStarted","Data":"7c7552a2565d1e015f9b5cbcf39060a03a6a571c6a44b1ec12ef3ae3e9c537a2"} Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.404451 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.405072 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-central-agent" containerID="cri-o://a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100" gracePeriod=30 Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.405108 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="proxy-httpd" containerID="cri-o://8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66" gracePeriod=30 Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.405276 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-notification-agent" containerID="cri-o://8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358" gracePeriod=30 Feb 14 19:07:01 crc kubenswrapper[4897]: I0214 19:07:01.405311 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="sg-core" containerID="cri-o://a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df" gracePeriod=30 Feb 14 19:07:01 crc kubenswrapper[4897]: E0214 19:07:01.629170 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07a95e5b_bcf6_42b9_a442_e114aa79c508.slice/crio-conmon-a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07a95e5b_bcf6_42b9_a442_e114aa79c508.slice/crio-conmon-8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07a95e5b_bcf6_42b9_a442_e114aa79c508.slice/crio-8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.094331 4897 generic.go:334] "Generic (PLEG): container finished" podID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerID="8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66" exitCode=0 Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.094709 4897 generic.go:334] "Generic (PLEG): container finished" podID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerID="a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df" exitCode=2 Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.094723 4897 generic.go:334] "Generic (PLEG): container finished" podID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerID="a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100" exitCode=0 Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.094414 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerDied","Data":"8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66"} Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.094813 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerDied","Data":"a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df"} Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.094826 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerDied","Data":"a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100"} Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.098401 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c42ac74f-f937-4f5a-973e-a97c0ec3986a","Type":"ContainerStarted","Data":"4219b26b43acec8c4d55592b7ca3caaef8f759bd59bc577ef05350e50086b312"} Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.139169 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.139146253 podStartE2EDuration="3.139146253s" podCreationTimestamp="2026-02-14 19:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:02.130353437 +0000 UTC m=+1475.106761920" watchObservedRunningTime="2026-02-14 19:07:02.139146253 +0000 UTC m=+1475.115554756" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.628828 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.653466 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-scripts\") pod \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.653532 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-config-data\") pod \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.653566 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29c25\" (UniqueName: \"kubernetes.io/projected/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-kube-api-access-29c25\") pod \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.653653 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-combined-ca-bundle\") pod \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\" (UID: \"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5\") " Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.663768 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-kube-api-access-29c25" (OuterVolumeSpecName: "kube-api-access-29c25") pod "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" (UID: "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5"). InnerVolumeSpecName "kube-api-access-29c25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.718554 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-scripts" (OuterVolumeSpecName: "scripts") pod "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" (UID: "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.736415 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" (UID: "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.737887 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-config-data" (OuterVolumeSpecName: "config-data") pod "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" (UID: "1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.757831 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.757865 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.757876 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29c25\" (UniqueName: \"kubernetes.io/projected/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-kube-api-access-29c25\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:02 crc kubenswrapper[4897]: I0214 19:07:02.757887 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.048308 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.125019 4897 generic.go:334] "Generic (PLEG): container finished" podID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerID="8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358" exitCode=0 Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.125172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerDied","Data":"8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358"} Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.125225 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a95e5b-bcf6-42b9-a442-e114aa79c508","Type":"ContainerDied","Data":"38d28d435e21d34181a0255bc4b485f30221c2e5a2a6cae41a4b6d726fa6f972"} Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.125262 4897 scope.go:117] "RemoveContainer" containerID="8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.125460 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.133835 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4tshq" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.134239 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4tshq" event={"ID":"1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5","Type":"ContainerDied","Data":"47490eef1ce51e5d70f98d210b98cd057845d1bbaa9d197a38b6ef6eddbf7089"} Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.144287 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47490eef1ce51e5d70f98d210b98cd057845d1bbaa9d197a38b6ef6eddbf7089" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.153995 4897 scope.go:117] "RemoveContainer" containerID="a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174160 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-sg-core-conf-yaml\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174273 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-combined-ca-bundle\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174299 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq5k9\" (UniqueName: \"kubernetes.io/projected/07a95e5b-bcf6-42b9-a442-e114aa79c508-kube-api-access-mq5k9\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174363 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-run-httpd\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174402 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-log-httpd\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174439 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-config-data\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.174547 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-scripts\") pod \"07a95e5b-bcf6-42b9-a442-e114aa79c508\" (UID: \"07a95e5b-bcf6-42b9-a442-e114aa79c508\") " Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.175501 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.177586 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.179211 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.184913 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a95e5b-bcf6-42b9-a442-e114aa79c508-kube-api-access-mq5k9" (OuterVolumeSpecName: "kube-api-access-mq5k9") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "kube-api-access-mq5k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185008 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-scripts" (OuterVolumeSpecName: "scripts") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185316 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.185768 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="sg-core" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185782 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="sg-core" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.185795 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="proxy-httpd" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185800 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="proxy-httpd" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.185820 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" containerName="nova-cell0-conductor-db-sync" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185826 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" containerName="nova-cell0-conductor-db-sync" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.185836 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-notification-agent" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185842 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-notification-agent" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.185878 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" 
containerName="ceilometer-central-agent" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.185884 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-central-agent" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.190657 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-notification-agent" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.190746 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="proxy-httpd" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.190814 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="ceilometer-central-agent" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.190846 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" containerName="nova-cell0-conductor-db-sync" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.190884 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" containerName="sg-core" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.192397 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.196215 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.197394 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xvvb8" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.219122 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.224871 4897 scope.go:117] "RemoveContainer" containerID="8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.231282 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.246141 4897 scope.go:117] "RemoveContainer" containerID="a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.270300 4897 scope.go:117] "RemoveContainer" containerID="8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.270818 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66\": container with ID starting with 8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66 not found: ID does not exist" containerID="8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.270848 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66"} err="failed to get container status \"8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66\": rpc error: code = NotFound desc = could not find container \"8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66\": container with ID starting with 8cad4295a4b2e0d8e59b6a9aaf2b4471ae03986672dfef9b88a203d6488cac66 not found: ID does not exist" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.270868 4897 scope.go:117] "RemoveContainer" containerID="a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.271696 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df\": container with ID starting with 
a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df not found: ID does not exist" containerID="a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.271744 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df"} err="failed to get container status \"a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df\": rpc error: code = NotFound desc = could not find container \"a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df\": container with ID starting with a8b0c4fd92288334df8d2266437f8697efe2c1826363b85fb7fb5ccf2a0778df not found: ID does not exist" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.271774 4897 scope.go:117] "RemoveContainer" containerID="8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.272329 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358\": container with ID starting with 8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358 not found: ID does not exist" containerID="8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.272365 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358"} err="failed to get container status \"8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358\": rpc error: code = NotFound desc = could not find container \"8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358\": container with ID starting with 8e941c21b700db6ca83251942d2c34353f2b516b02a1a767443c91849dde0358 not found: ID does not 
exist" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.272384 4897 scope.go:117] "RemoveContainer" containerID="a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100" Feb 14 19:07:03 crc kubenswrapper[4897]: E0214 19:07:03.272983 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100\": container with ID starting with a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100 not found: ID does not exist" containerID="a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.273015 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100"} err="failed to get container status \"a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100\": rpc error: code = NotFound desc = could not find container \"a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100\": container with ID starting with a8c32396ac81429f441d2275688db148f9368d391c250112f6770615211d5100 not found: ID does not exist" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280085 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280354 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9gv\" (UniqueName: \"kubernetes.io/projected/5bf82541-7932-4602-bdc4-ee1514cd59f4-kube-api-access-nm9gv\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") 
" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280457 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280560 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280581 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq5k9\" (UniqueName: \"kubernetes.io/projected/07a95e5b-bcf6-42b9-a442-e114aa79c508-kube-api-access-mq5k9\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280597 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a95e5b-bcf6-42b9-a442-e114aa79c508-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.280608 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.318574 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.343282 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-config-data" (OuterVolumeSpecName: "config-data") pod "07a95e5b-bcf6-42b9-a442-e114aa79c508" (UID: "07a95e5b-bcf6-42b9-a442-e114aa79c508"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.382054 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.382201 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9gv\" (UniqueName: \"kubernetes.io/projected/5bf82541-7932-4602-bdc4-ee1514cd59f4-kube-api-access-nm9gv\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.382282 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.382389 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.382400 4897 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a95e5b-bcf6-42b9-a442-e114aa79c508-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.385623 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.385642 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.398627 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9gv\" (UniqueName: \"kubernetes.io/projected/5bf82541-7932-4602-bdc4-ee1514cd59f4-kube-api-access-nm9gv\") pod \"nova-cell0-conductor-0\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.467462 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.498511 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.510529 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.513291 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.516596 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.516974 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.526273 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.526833 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.587532 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-run-httpd\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.587579 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-log-httpd\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.587622 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-scripts\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.587641 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.587877 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.588102 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26q5j\" (UniqueName: \"kubernetes.io/projected/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-kube-api-access-26q5j\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.588230 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-config-data\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690258 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26q5j\" (UniqueName: \"kubernetes.io/projected/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-kube-api-access-26q5j\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690658 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-config-data\") pod \"ceilometer-0\" (UID: 
\"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690748 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-run-httpd\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690774 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-log-httpd\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690823 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-scripts\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690851 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.690907 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.692067 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-run-httpd\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.695850 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-log-httpd\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.696273 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.699485 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-config-data\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.700225 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-scripts\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.705435 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.708974 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-26q5j\" (UniqueName: \"kubernetes.io/projected/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-kube-api-access-26q5j\") pod \"ceilometer-0\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " pod="openstack/ceilometer-0" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.814559 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a95e5b-bcf6-42b9-a442-e114aa79c508" path="/var/lib/kubelet/pods/07a95e5b-bcf6-42b9-a442-e114aa79c508/volumes" Feb 14 19:07:03 crc kubenswrapper[4897]: I0214 19:07:03.835746 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:04 crc kubenswrapper[4897]: W0214 19:07:04.024080 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf82541_7932_4602_bdc4_ee1514cd59f4.slice/crio-b55186507b00102e72e73792e210ea1bb829854d6d308deaebf1cd1abdf27872 WatchSource:0}: Error finding container b55186507b00102e72e73792e210ea1bb829854d6d308deaebf1cd1abdf27872: Status 404 returned error can't find the container with id b55186507b00102e72e73792e210ea1bb829854d6d308deaebf1cd1abdf27872 Feb 14 19:07:04 crc kubenswrapper[4897]: I0214 19:07:04.041946 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:04 crc kubenswrapper[4897]: I0214 19:07:04.153996 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5bf82541-7932-4602-bdc4-ee1514cd59f4","Type":"ContainerStarted","Data":"b55186507b00102e72e73792e210ea1bb829854d6d308deaebf1cd1abdf27872"} Feb 14 19:07:04 crc kubenswrapper[4897]: I0214 19:07:04.337914 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:05 crc kubenswrapper[4897]: I0214 19:07:05.167384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerStarted","Data":"2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960"} Feb 14 19:07:05 crc kubenswrapper[4897]: I0214 19:07:05.167449 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerStarted","Data":"bf81e095d5cd10dde0843cab931ed9754bcc4202ae5f0b4b18ad9b5bde8c7f22"} Feb 14 19:07:05 crc kubenswrapper[4897]: I0214 19:07:05.168928 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5bf82541-7932-4602-bdc4-ee1514cd59f4","Type":"ContainerStarted","Data":"aaefaa1ffdeed51225cf30a67696083b0f883149679af731bfab02cf4dc458fc"} Feb 14 19:07:05 crc kubenswrapper[4897]: I0214 19:07:05.169133 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:05 crc kubenswrapper[4897]: I0214 19:07:05.191265 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.191235725 podStartE2EDuration="2.191235725s" podCreationTimestamp="2026-02-14 19:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:05.18246574 +0000 UTC m=+1478.158874233" watchObservedRunningTime="2026-02-14 19:07:05.191235725 +0000 UTC m=+1478.167644198" Feb 14 19:07:05 crc kubenswrapper[4897]: I0214 19:07:05.613866 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:06 crc kubenswrapper[4897]: I0214 19:07:06.182828 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerStarted","Data":"bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280"} Feb 14 19:07:07 crc kubenswrapper[4897]: I0214 
19:07:07.193337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerStarted","Data":"7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f"} Feb 14 19:07:07 crc kubenswrapper[4897]: I0214 19:07:07.964612 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:07 crc kubenswrapper[4897]: I0214 19:07:07.965215 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="5bf82541-7932-4602-bdc4-ee1514cd59f4" containerName="nova-cell0-conductor-conductor" containerID="cri-o://aaefaa1ffdeed51225cf30a67696083b0f883149679af731bfab02cf4dc458fc" gracePeriod=30 Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.207346 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerStarted","Data":"37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23"} Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.207573 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-central-agent" containerID="cri-o://2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960" gracePeriod=30 Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.207916 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.208367 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="proxy-httpd" containerID="cri-o://37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23" gracePeriod=30 Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 
19:07:08.208443 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="sg-core" containerID="cri-o://7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f" gracePeriod=30 Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.208501 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-notification-agent" containerID="cri-o://bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280" gracePeriod=30 Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.230192 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.776261753 podStartE2EDuration="5.230170454s" podCreationTimestamp="2026-02-14 19:07:03 +0000 UTC" firstStartedPulling="2026-02-14 19:07:04.329244924 +0000 UTC m=+1477.305653407" lastFinishedPulling="2026-02-14 19:07:07.783153625 +0000 UTC m=+1480.759562108" observedRunningTime="2026-02-14 19:07:08.226992714 +0000 UTC m=+1481.203401217" watchObservedRunningTime="2026-02-14 19:07:08.230170454 +0000 UTC m=+1481.206578937" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.566893 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-t6njw"] Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.569977 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.593097 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-t6njw"] Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.677617 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-99e6-account-create-update-wvnr5"] Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.679142 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.680969 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.716898 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-99e6-account-create-update-wvnr5"] Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.733208 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpgp\" (UniqueName: \"kubernetes.io/projected/9031cb08-dfc3-4d67-b9f2-2953713beb20-kube-api-access-vrpgp\") pod \"aodh-db-create-t6njw\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.733325 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9031cb08-dfc3-4d67-b9f2-2953713beb20-operator-scripts\") pod \"aodh-db-create-t6njw\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.835511 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9031cb08-dfc3-4d67-b9f2-2953713beb20-operator-scripts\") pod 
\"aodh-db-create-t6njw\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.835893 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpgp\" (UniqueName: \"kubernetes.io/projected/9031cb08-dfc3-4d67-b9f2-2953713beb20-kube-api-access-vrpgp\") pod \"aodh-db-create-t6njw\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.835947 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afcb6bce-1132-4c0b-836f-82c6b0fd1406-operator-scripts\") pod \"aodh-99e6-account-create-update-wvnr5\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.835972 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdstl\" (UniqueName: \"kubernetes.io/projected/afcb6bce-1132-4c0b-836f-82c6b0fd1406-kube-api-access-fdstl\") pod \"aodh-99e6-account-create-update-wvnr5\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.836562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9031cb08-dfc3-4d67-b9f2-2953713beb20-operator-scripts\") pod \"aodh-db-create-t6njw\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.857095 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpgp\" (UniqueName: 
\"kubernetes.io/projected/9031cb08-dfc3-4d67-b9f2-2953713beb20-kube-api-access-vrpgp\") pod \"aodh-db-create-t6njw\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.896968 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.937860 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afcb6bce-1132-4c0b-836f-82c6b0fd1406-operator-scripts\") pod \"aodh-99e6-account-create-update-wvnr5\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.938057 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdstl\" (UniqueName: \"kubernetes.io/projected/afcb6bce-1132-4c0b-836f-82c6b0fd1406-kube-api-access-fdstl\") pod \"aodh-99e6-account-create-update-wvnr5\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.940152 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afcb6bce-1132-4c0b-836f-82c6b0fd1406-operator-scripts\") pod \"aodh-99e6-account-create-update-wvnr5\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.965505 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdstl\" (UniqueName: \"kubernetes.io/projected/afcb6bce-1132-4c0b-836f-82c6b0fd1406-kube-api-access-fdstl\") pod \"aodh-99e6-account-create-update-wvnr5\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " 
pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:08 crc kubenswrapper[4897]: I0214 19:07:08.996939 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.222999 4897 generic.go:334] "Generic (PLEG): container finished" podID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerID="37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23" exitCode=0 Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.223362 4897 generic.go:334] "Generic (PLEG): container finished" podID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerID="7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f" exitCode=2 Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.223373 4897 generic.go:334] "Generic (PLEG): container finished" podID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerID="bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280" exitCode=0 Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.223059 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerDied","Data":"37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23"} Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.223456 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerDied","Data":"7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f"} Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.223471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerDied","Data":"bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280"} Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.227457 4897 generic.go:334] "Generic 
(PLEG): container finished" podID="5bf82541-7932-4602-bdc4-ee1514cd59f4" containerID="aaefaa1ffdeed51225cf30a67696083b0f883149679af731bfab02cf4dc458fc" exitCode=0 Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.227485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5bf82541-7932-4602-bdc4-ee1514cd59f4","Type":"ContainerDied","Data":"aaefaa1ffdeed51225cf30a67696083b0f883149679af731bfab02cf4dc458fc"} Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.750256 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.750658 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 14 19:07:09 crc kubenswrapper[4897]: W0214 19:07:09.848138 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9031cb08_dfc3_4d67_b9f2_2953713beb20.slice/crio-ae5ad553c8eae54c2ff65821c21242db97c6fc4c336dcf8b67265f7246d98d4c WatchSource:0}: Error finding container ae5ad553c8eae54c2ff65821c21242db97c6fc4c336dcf8b67265f7246d98d4c: Status 404 returned error can't find the container with id ae5ad553c8eae54c2ff65821c21242db97c6fc4c336dcf8b67265f7246d98d4c Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.858868 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-t6njw"] Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.858967 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 19:07:09 crc kubenswrapper[4897]: I0214 19:07:09.859001 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.006062 4897 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.065970 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-99e6-account-create-update-wvnr5"] Feb 14 19:07:10 crc kubenswrapper[4897]: W0214 19:07:10.072711 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafcb6bce_1132_4c0b_836f_82c6b0fd1406.slice/crio-3309b12824d8d500c0f96974bd93c6a987d17d0adc3d7ee913c5ad028e72c9fa WatchSource:0}: Error finding container 3309b12824d8d500c0f96974bd93c6a987d17d0adc3d7ee913c5ad028e72c9fa: Status 404 returned error can't find the container with id 3309b12824d8d500c0f96974bd93c6a987d17d0adc3d7ee913c5ad028e72c9fa Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.169222 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-combined-ca-bundle\") pod \"5bf82541-7932-4602-bdc4-ee1514cd59f4\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.169360 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-config-data\") pod \"5bf82541-7932-4602-bdc4-ee1514cd59f4\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.169453 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm9gv\" (UniqueName: \"kubernetes.io/projected/5bf82541-7932-4602-bdc4-ee1514cd59f4-kube-api-access-nm9gv\") pod \"5bf82541-7932-4602-bdc4-ee1514cd59f4\" (UID: \"5bf82541-7932-4602-bdc4-ee1514cd59f4\") " Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.181198 4897 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/5bf82541-7932-4602-bdc4-ee1514cd59f4-kube-api-access-nm9gv" (OuterVolumeSpecName: "kube-api-access-nm9gv") pod "5bf82541-7932-4602-bdc4-ee1514cd59f4" (UID: "5bf82541-7932-4602-bdc4-ee1514cd59f4"). InnerVolumeSpecName "kube-api-access-nm9gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.213678 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bf82541-7932-4602-bdc4-ee1514cd59f4" (UID: "5bf82541-7932-4602-bdc4-ee1514cd59f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.214742 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-config-data" (OuterVolumeSpecName: "config-data") pod "5bf82541-7932-4602-bdc4-ee1514cd59f4" (UID: "5bf82541-7932-4602-bdc4-ee1514cd59f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.242960 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5bf82541-7932-4602-bdc4-ee1514cd59f4","Type":"ContainerDied","Data":"b55186507b00102e72e73792e210ea1bb829854d6d308deaebf1cd1abdf27872"} Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.243200 4897 scope.go:117] "RemoveContainer" containerID="aaefaa1ffdeed51225cf30a67696083b0f883149679af731bfab02cf4dc458fc" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.243426 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.247690 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-99e6-account-create-update-wvnr5" event={"ID":"afcb6bce-1132-4c0b-836f-82c6b0fd1406","Type":"ContainerStarted","Data":"3309b12824d8d500c0f96974bd93c6a987d17d0adc3d7ee913c5ad028e72c9fa"} Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.250337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t6njw" event={"ID":"9031cb08-dfc3-4d67-b9f2-2953713beb20","Type":"ContainerStarted","Data":"dddae43a08b2757ad4f6142d87658cdd6c6686245df43ca11144d39c9ab8ede9"} Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.250456 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.250546 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t6njw" event={"ID":"9031cb08-dfc3-4d67-b9f2-2953713beb20","Type":"ContainerStarted","Data":"ae5ad553c8eae54c2ff65821c21242db97c6fc4c336dcf8b67265f7246d98d4c"} Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.250934 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.266649 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-t6njw" podStartSLOduration=2.266630472 podStartE2EDuration="2.266630472s" podCreationTimestamp="2026-02-14 19:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:10.262134321 +0000 UTC m=+1483.238542814" watchObservedRunningTime="2026-02-14 19:07:10.266630472 +0000 UTC m=+1483.243038955" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.272352 4897 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.272377 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm9gv\" (UniqueName: \"kubernetes.io/projected/5bf82541-7932-4602-bdc4-ee1514cd59f4-kube-api-access-nm9gv\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.272387 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf82541-7932-4602-bdc4-ee1514cd59f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.374116 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.390958 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.404393 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:10 crc kubenswrapper[4897]: E0214 19:07:10.404923 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf82541-7932-4602-bdc4-ee1514cd59f4" containerName="nova-cell0-conductor-conductor" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.404939 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf82541-7932-4602-bdc4-ee1514cd59f4" containerName="nova-cell0-conductor-conductor" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.405156 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf82541-7932-4602-bdc4-ee1514cd59f4" containerName="nova-cell0-conductor-conductor" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.405940 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.413182 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.413444 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xvvb8" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.428646 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.578388 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520a8b04-bc67-440f-958b-166905cd4e0a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.578513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6dz\" (UniqueName: \"kubernetes.io/projected/520a8b04-bc67-440f-958b-166905cd4e0a-kube-api-access-vb6dz\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.578547 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520a8b04-bc67-440f-958b-166905cd4e0a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.680957 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6dz\" (UniqueName: 
\"kubernetes.io/projected/520a8b04-bc67-440f-958b-166905cd4e0a-kube-api-access-vb6dz\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.681019 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520a8b04-bc67-440f-958b-166905cd4e0a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.681161 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520a8b04-bc67-440f-958b-166905cd4e0a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.684937 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520a8b04-bc67-440f-958b-166905cd4e0a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.687836 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520a8b04-bc67-440f-958b-166905cd4e0a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.703986 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6dz\" (UniqueName: \"kubernetes.io/projected/520a8b04-bc67-440f-958b-166905cd4e0a-kube-api-access-vb6dz\") pod \"nova-cell0-conductor-0\" (UID: 
\"520a8b04-bc67-440f-958b-166905cd4e0a\") " pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:10 crc kubenswrapper[4897]: I0214 19:07:10.730585 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.177871 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.262711 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t6njw" event={"ID":"9031cb08-dfc3-4d67-b9f2-2953713beb20","Type":"ContainerDied","Data":"dddae43a08b2757ad4f6142d87658cdd6c6686245df43ca11144d39c9ab8ede9"} Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.262509 4897 generic.go:334] "Generic (PLEG): container finished" podID="9031cb08-dfc3-4d67-b9f2-2953713beb20" containerID="dddae43a08b2757ad4f6142d87658cdd6c6686245df43ca11144d39c9ab8ede9" exitCode=0 Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.264800 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"520a8b04-bc67-440f-958b-166905cd4e0a","Type":"ContainerStarted","Data":"074b2bb05db9c5bad18c2c018d3108dcef19d3821f673c45187840c074931cd7"} Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.268415 4897 generic.go:334] "Generic (PLEG): container finished" podID="afcb6bce-1132-4c0b-836f-82c6b0fd1406" containerID="55bb5a98301b0ab3ae3fbd80df8fa1d0991f008eac023a4c67ea9f0ca034aa77" exitCode=0 Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.268480 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-99e6-account-create-update-wvnr5" event={"ID":"afcb6bce-1132-4c0b-836f-82c6b0fd1406","Type":"ContainerDied","Data":"55bb5a98301b0ab3ae3fbd80df8fa1d0991f008eac023a4c67ea9f0ca034aa77"} Feb 14 19:07:11 crc kubenswrapper[4897]: I0214 19:07:11.806957 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5bf82541-7932-4602-bdc4-ee1514cd59f4" path="/var/lib/kubelet/pods/5bf82541-7932-4602-bdc4-ee1514cd59f4/volumes" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.091663 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.148578 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.286325 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"520a8b04-bc67-440f-958b-166905cd4e0a","Type":"ContainerStarted","Data":"f963e5a4564d9a216dd944b60c1db9d587915d5fb17aa6d030e0346a680352bd"} Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.287473 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.307278 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.307262031 podStartE2EDuration="2.307262031s" podCreationTimestamp="2026-02-14 19:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:12.306634242 +0000 UTC m=+1485.283042735" watchObservedRunningTime="2026-02-14 19:07:12.307262031 +0000 UTC m=+1485.283670514" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.589993 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gfs45"] Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.607053 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.625858 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfs45"] Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.743205 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-catalog-content\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.743332 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-utilities\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.743491 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdqvk\" (UniqueName: \"kubernetes.io/projected/7df22cbc-a251-4d73-8c0c-c83d17200278-kube-api-access-gdqvk\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.853654 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-catalog-content\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.854360 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-utilities\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.855778 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-utilities\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.856077 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-catalog-content\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.856226 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdqvk\" (UniqueName: \"kubernetes.io/projected/7df22cbc-a251-4d73-8c0c-c83d17200278-kube-api-access-gdqvk\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.876621 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdqvk\" (UniqueName: \"kubernetes.io/projected/7df22cbc-a251-4d73-8c0c-c83d17200278-kube-api-access-gdqvk\") pod \"redhat-marketplace-gfs45\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.932872 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:12 crc kubenswrapper[4897]: I0214 19:07:12.951874 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.061846 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrpgp\" (UniqueName: \"kubernetes.io/projected/9031cb08-dfc3-4d67-b9f2-2953713beb20-kube-api-access-vrpgp\") pod \"9031cb08-dfc3-4d67-b9f2-2953713beb20\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.061998 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9031cb08-dfc3-4d67-b9f2-2953713beb20-operator-scripts\") pod \"9031cb08-dfc3-4d67-b9f2-2953713beb20\" (UID: \"9031cb08-dfc3-4d67-b9f2-2953713beb20\") " Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.062788 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9031cb08-dfc3-4d67-b9f2-2953713beb20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9031cb08-dfc3-4d67-b9f2-2953713beb20" (UID: "9031cb08-dfc3-4d67-b9f2-2953713beb20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.066909 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9031cb08-dfc3-4d67-b9f2-2953713beb20-kube-api-access-vrpgp" (OuterVolumeSpecName: "kube-api-access-vrpgp") pod "9031cb08-dfc3-4d67-b9f2-2953713beb20" (UID: "9031cb08-dfc3-4d67-b9f2-2953713beb20"). InnerVolumeSpecName "kube-api-access-vrpgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.097806 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.164352 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9031cb08-dfc3-4d67-b9f2-2953713beb20-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.164382 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrpgp\" (UniqueName: \"kubernetes.io/projected/9031cb08-dfc3-4d67-b9f2-2953713beb20-kube-api-access-vrpgp\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.265752 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afcb6bce-1132-4c0b-836f-82c6b0fd1406-operator-scripts\") pod \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.265843 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdstl\" (UniqueName: \"kubernetes.io/projected/afcb6bce-1132-4c0b-836f-82c6b0fd1406-kube-api-access-fdstl\") pod \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\" (UID: \"afcb6bce-1132-4c0b-836f-82c6b0fd1406\") " Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.270705 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afcb6bce-1132-4c0b-836f-82c6b0fd1406-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afcb6bce-1132-4c0b-836f-82c6b0fd1406" (UID: "afcb6bce-1132-4c0b-836f-82c6b0fd1406"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.273294 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afcb6bce-1132-4c0b-836f-82c6b0fd1406-kube-api-access-fdstl" (OuterVolumeSpecName: "kube-api-access-fdstl") pod "afcb6bce-1132-4c0b-836f-82c6b0fd1406" (UID: "afcb6bce-1132-4c0b-836f-82c6b0fd1406"). InnerVolumeSpecName "kube-api-access-fdstl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.299857 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t6njw" event={"ID":"9031cb08-dfc3-4d67-b9f2-2953713beb20","Type":"ContainerDied","Data":"ae5ad553c8eae54c2ff65821c21242db97c6fc4c336dcf8b67265f7246d98d4c"} Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.299906 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5ad553c8eae54c2ff65821c21242db97c6fc4c336dcf8b67265f7246d98d4c" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.299973 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t6njw" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.303812 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-99e6-account-create-update-wvnr5" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.303937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-99e6-account-create-update-wvnr5" event={"ID":"afcb6bce-1132-4c0b-836f-82c6b0fd1406","Type":"ContainerDied","Data":"3309b12824d8d500c0f96974bd93c6a987d17d0adc3d7ee913c5ad028e72c9fa"} Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.303965 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3309b12824d8d500c0f96974bd93c6a987d17d0adc3d7ee913c5ad028e72c9fa" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.368681 4897 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afcb6bce-1132-4c0b-836f-82c6b0fd1406-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.368725 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdstl\" (UniqueName: \"kubernetes.io/projected/afcb6bce-1132-4c0b-836f-82c6b0fd1406-kube-api-access-fdstl\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.538926 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfs45"] Feb 14 19:07:13 crc kubenswrapper[4897]: W0214 19:07:13.590125 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df22cbc_a251_4d73_8c0c_c83d17200278.slice/crio-0607c3846ec4dd194e3942c131a22b42b0c657ee99d2dd5d5757b40f8e3db189 WatchSource:0}: Error finding container 0607c3846ec4dd194e3942c131a22b42b0c657ee99d2dd5d5757b40f8e3db189: Status 404 returned error can't find the container with id 0607c3846ec4dd194e3942c131a22b42b0c657ee99d2dd5d5757b40f8e3db189 Feb 14 19:07:13 crc kubenswrapper[4897]: I0214 19:07:13.993502 4897 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.085489 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-config-data\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.085862 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-log-httpd\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.085968 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-sg-core-conf-yaml\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.086150 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26q5j\" (UniqueName: \"kubernetes.io/projected/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-kube-api-access-26q5j\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.086200 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-run-httpd\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.086235 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-scripts\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.086261 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-combined-ca-bundle\") pod \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\" (UID: \"45dff63f-a226-4b9c-aa9c-bd84d92f1f10\") " Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.086550 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.087164 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.087699 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.087722 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.093288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-scripts" (OuterVolumeSpecName: "scripts") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.093947 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-kube-api-access-26q5j" (OuterVolumeSpecName: "kube-api-access-26q5j") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "kube-api-access-26q5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.121650 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.190407 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26q5j\" (UniqueName: \"kubernetes.io/projected/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-kube-api-access-26q5j\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.190445 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.190459 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.197102 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-config-data" (OuterVolumeSpecName: "config-data") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.200476 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45dff63f-a226-4b9c-aa9c-bd84d92f1f10" (UID: "45dff63f-a226-4b9c-aa9c-bd84d92f1f10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.292720 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.292974 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dff63f-a226-4b9c-aa9c-bd84d92f1f10-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.321025 4897 generic.go:334] "Generic (PLEG): container finished" podID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerID="2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960" exitCode=0 Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.321309 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerDied","Data":"2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960"} Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.321388 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45dff63f-a226-4b9c-aa9c-bd84d92f1f10","Type":"ContainerDied","Data":"bf81e095d5cd10dde0843cab931ed9754bcc4202ae5f0b4b18ad9b5bde8c7f22"} Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.321420 4897 scope.go:117] "RemoveContainer" containerID="37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.321640 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.324469 4897 generic.go:334] "Generic (PLEG): container finished" podID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerID="1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b" exitCode=0 Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.324562 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerDied","Data":"1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b"} Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.324641 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerStarted","Data":"0607c3846ec4dd194e3942c131a22b42b0c657ee99d2dd5d5757b40f8e3db189"} Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.367283 4897 scope.go:117] "RemoveContainer" containerID="7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.388781 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.395383 4897 scope.go:117] "RemoveContainer" containerID="bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.402416 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.411234 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.411841 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="sg-core" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 
19:07:14.411862 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="sg-core" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.411893 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-central-agent" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.411904 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-central-agent" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.411923 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9031cb08-dfc3-4d67-b9f2-2953713beb20" containerName="mariadb-database-create" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.411932 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9031cb08-dfc3-4d67-b9f2-2953713beb20" containerName="mariadb-database-create" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.411953 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="proxy-httpd" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.411962 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="proxy-httpd" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.411983 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-notification-agent" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.411992 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-notification-agent" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.412009 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcb6bce-1132-4c0b-836f-82c6b0fd1406" containerName="mariadb-account-create-update" 
Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412018 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcb6bce-1132-4c0b-836f-82c6b0fd1406" containerName="mariadb-account-create-update" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412304 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="sg-core" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412327 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9031cb08-dfc3-4d67-b9f2-2953713beb20" containerName="mariadb-database-create" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412348 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-notification-agent" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412364 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcb6bce-1132-4c0b-836f-82c6b0fd1406" containerName="mariadb-account-create-update" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412375 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="proxy-httpd" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.412389 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" containerName="ceilometer-central-agent" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.415206 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.417672 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.419863 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.425118 4897 scope.go:117] "RemoveContainer" containerID="2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.425999 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.485180 4897 scope.go:117] "RemoveContainer" containerID="37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.485761 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23\": container with ID starting with 37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23 not found: ID does not exist" containerID="37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.485817 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23"} err="failed to get container status \"37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23\": rpc error: code = NotFound desc = could not find container \"37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23\": container with ID starting with 37d057246023243980c9cb441b99435e6cc6312cc0228dfd036fe82eccba6d23 not found: ID does not exist" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 
19:07:14.485849 4897 scope.go:117] "RemoveContainer" containerID="7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.491212 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f\": container with ID starting with 7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f not found: ID does not exist" containerID="7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.491250 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f"} err="failed to get container status \"7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f\": rpc error: code = NotFound desc = could not find container \"7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f\": container with ID starting with 7ec01835d606cff8e32b68afbd9dcdec8585a0ac8e8b0654c39b8cb40035711f not found: ID does not exist" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.491271 4897 scope.go:117] "RemoveContainer" containerID="bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.491702 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280\": container with ID starting with bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280 not found: ID does not exist" containerID="bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.491758 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280"} err="failed to get container status \"bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280\": rpc error: code = NotFound desc = could not find container \"bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280\": container with ID starting with bc2ba80f963fb8673a32e0e793bda1dbcec6e810a3f8b12891a4ee5995949280 not found: ID does not exist" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.491784 4897 scope.go:117] "RemoveContainer" containerID="2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960" Feb 14 19:07:14 crc kubenswrapper[4897]: E0214 19:07:14.492242 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960\": container with ID starting with 2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960 not found: ID does not exist" containerID="2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.492341 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960"} err="failed to get container status \"2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960\": rpc error: code = NotFound desc = could not find container \"2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960\": container with ID starting with 2ed39d29a72ab4bc0bc778788e5c251011ba94e5078cdf4de1a7f00ced10c960 not found: ID does not exist" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.601806 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.601847 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.601931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-scripts\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.601961 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.602010 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-config-data\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.602066 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-log-httpd\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.602116 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bz4\" (UniqueName: \"kubernetes.io/projected/7bae8002-6e52-4df1-b7d6-e42290023f2f-kube-api-access-94bz4\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704342 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-scripts\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704445 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704559 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-config-data\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704657 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-log-httpd\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94bz4\" (UniqueName: \"kubernetes.io/projected/7bae8002-6e52-4df1-b7d6-e42290023f2f-kube-api-access-94bz4\") pod 
\"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704826 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-run-httpd\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.704862 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.706441 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-log-httpd\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.706646 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-run-httpd\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.708717 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.709554 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-scripts\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.710855 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.714306 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-config-data\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.726533 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94bz4\" (UniqueName: \"kubernetes.io/projected/7bae8002-6e52-4df1-b7d6-e42290023f2f-kube-api-access-94bz4\") pod \"ceilometer-0\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " pod="openstack/ceilometer-0" Feb 14 19:07:14 crc kubenswrapper[4897]: I0214 19:07:14.764884 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:15 crc kubenswrapper[4897]: I0214 19:07:15.293669 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:15 crc kubenswrapper[4897]: W0214 19:07:15.304300 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bae8002_6e52_4df1_b7d6_e42290023f2f.slice/crio-133db7345cbddd31345ecd198055d455250004288cbca6229e896a68724ecd03 WatchSource:0}: Error finding container 133db7345cbddd31345ecd198055d455250004288cbca6229e896a68724ecd03: Status 404 returned error can't find the container with id 133db7345cbddd31345ecd198055d455250004288cbca6229e896a68724ecd03 Feb 14 19:07:15 crc kubenswrapper[4897]: I0214 19:07:15.348646 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerStarted","Data":"133db7345cbddd31345ecd198055d455250004288cbca6229e896a68724ecd03"} Feb 14 19:07:15 crc kubenswrapper[4897]: I0214 19:07:15.353060 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerStarted","Data":"8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6"} Feb 14 19:07:15 crc kubenswrapper[4897]: I0214 19:07:15.808246 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45dff63f-a226-4b9c-aa9c-bd84d92f1f10" path="/var/lib/kubelet/pods/45dff63f-a226-4b9c-aa9c-bd84d92f1f10/volumes" Feb 14 19:07:16 crc kubenswrapper[4897]: I0214 19:07:16.365096 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerStarted","Data":"58403a42b92c553e49ef2d748b8392444527d201c48ea6d6e9a6b5d51b768eaf"} Feb 14 19:07:16 crc kubenswrapper[4897]: I0214 19:07:16.367376 4897 generic.go:334] 
"Generic (PLEG): container finished" podID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerID="8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6" exitCode=0
Feb 14 19:07:16 crc kubenswrapper[4897]: I0214 19:07:16.367433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerDied","Data":"8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6"}
Feb 14 19:07:17 crc kubenswrapper[4897]: I0214 19:07:17.400492 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerStarted","Data":"17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3"}
Feb 14 19:07:17 crc kubenswrapper[4897]: I0214 19:07:17.407850 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerStarted","Data":"9bc7ae3d8b6287e823d2e339beadc0d4fd4c05ad74e4eb2b3f028a866e9c4e6b"}
Feb 14 19:07:17 crc kubenswrapper[4897]: I0214 19:07:17.435750 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gfs45" podStartSLOduration=3.036829921 podStartE2EDuration="5.435730393s" podCreationTimestamp="2026-02-14 19:07:12 +0000 UTC" firstStartedPulling="2026-02-14 19:07:14.328366098 +0000 UTC m=+1487.304774581" lastFinishedPulling="2026-02-14 19:07:16.72726656 +0000 UTC m=+1489.703675053" observedRunningTime="2026-02-14 19:07:17.430973795 +0000 UTC m=+1490.407382278" watchObservedRunningTime="2026-02-14 19:07:17.435730393 +0000 UTC m=+1490.412138876"
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.426206 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerStarted","Data":"8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e"}
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.960600 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-hb6vq"]
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.962678 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.965468 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5zcr5"
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.966261 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.966673 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.966877 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 14 19:07:18 crc kubenswrapper[4897]: I0214 19:07:18.976654 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hb6vq"]
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.112251 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-scripts\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.112437 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-combined-ca-bundle\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.112753 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbc2\" (UniqueName: \"kubernetes.io/projected/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-kube-api-access-tmbc2\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.113337 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-config-data\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.215477 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-scripts\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.215549 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-combined-ca-bundle\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.215622 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmbc2\" (UniqueName: \"kubernetes.io/projected/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-kube-api-access-tmbc2\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.215702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-config-data\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.219981 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-combined-ca-bundle\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.220332 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-config-data\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.230123 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-scripts\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.234524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmbc2\" (UniqueName: \"kubernetes.io/projected/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-kube-api-access-tmbc2\") pod \"aodh-db-sync-hb6vq\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") " pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.415203 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.439577 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerStarted","Data":"470efcc96ea10d6f36db068042218f2ae8ebb22f4fd9cacd4f039acc58b2afd8"}
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.439742 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.470402 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8564309909999999 podStartE2EDuration="5.470386366s" podCreationTimestamp="2026-02-14 19:07:14 +0000 UTC" firstStartedPulling="2026-02-14 19:07:15.307922318 +0000 UTC m=+1488.284330801" lastFinishedPulling="2026-02-14 19:07:18.921877693 +0000 UTC m=+1491.898286176" observedRunningTime="2026-02-14 19:07:19.465938276 +0000 UTC m=+1492.442346779" watchObservedRunningTime="2026-02-14 19:07:19.470386366 +0000 UTC m=+1492.446794849"
Feb 14 19:07:19 crc kubenswrapper[4897]: I0214 19:07:19.924443 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hb6vq"]
Feb 14 19:07:20 crc kubenswrapper[4897]: I0214 19:07:20.449800 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hb6vq" event={"ID":"b9ec880e-a3d2-47d3-86b2-b3e826d66a52","Type":"ContainerStarted","Data":"2c1b09a1077da6a4e9ead93cdbada3c23f6be62e1e287430379d603ab654ec72"}
Feb 14 19:07:20 crc kubenswrapper[4897]: I0214 19:07:20.767804 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.311061 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ltbnn"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.312463 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.314651 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.314775 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.328135 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ltbnn"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.374615 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.374912 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-config-data\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.375052 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-scripts\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.375072 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclgj\" (UniqueName: \"kubernetes.io/projected/883bcca0-6930-4d70-9386-657adbf063c9-kube-api-access-kclgj\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.482120 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.482203 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-config-data\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.482340 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-scripts\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.482363 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclgj\" (UniqueName: \"kubernetes.io/projected/883bcca0-6930-4d70-9386-657adbf063c9-kube-api-access-kclgj\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.488604 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.493430 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-scripts\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.495559 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-config-data\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.534527 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclgj\" (UniqueName: \"kubernetes.io/projected/883bcca0-6930-4d70-9386-657adbf063c9-kube-api-access-kclgj\") pod \"nova-cell0-cell-mapping-ltbnn\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.589399 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.591355 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.601495 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.647525 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.666933 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.672017 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.678872 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.685398 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ltbnn"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.692421 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.692579 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-config-data\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.692728 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2xml\" (UniqueName: \"kubernetes.io/projected/9ed93b6d-b02e-486a-9451-fdb9604769a0-kube-api-access-w2xml\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.692866 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed93b6d-b02e-486a-9451-fdb9604769a0-logs\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.693277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.693377 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454d8a2d-ab1c-41a6-810f-23687631a17b-logs\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.693496 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pmjl\" (UniqueName: \"kubernetes.io/projected/454d8a2d-ab1c-41a6-810f-23687631a17b-kube-api-access-4pmjl\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.693860 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-config-data\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.735087 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.778253 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.779836 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.783510 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.815643 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-config-data\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.830668 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.832802 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmx7\" (UniqueName: \"kubernetes.io/projected/2250ea12-361c-47a8-89cf-8c46d41a0ab8-kube-api-access-xdmx7\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833102 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833149 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-config-data\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833183 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2xml\" (UniqueName: \"kubernetes.io/projected/9ed93b6d-b02e-486a-9451-fdb9604769a0-kube-api-access-w2xml\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833222 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed93b6d-b02e-486a-9451-fdb9604769a0-logs\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833281 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833298 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454d8a2d-ab1c-41a6-810f-23687631a17b-logs\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833318 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833349 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pmjl\" (UniqueName: \"kubernetes.io/projected/454d8a2d-ab1c-41a6-810f-23687631a17b-kube-api-access-4pmjl\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.833417 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-config-data\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.835282 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.836835 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.838357 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed93b6d-b02e-486a-9451-fdb9604769a0-logs\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.839438 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454d8a2d-ab1c-41a6-810f-23687631a17b-logs\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.846133 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.847358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.853996 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-config-data\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.857509 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.858739 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-config-data\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.863630 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.885580 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2xml\" (UniqueName: \"kubernetes.io/projected/9ed93b6d-b02e-486a-9451-fdb9604769a0-kube-api-access-w2xml\") pod \"nova-metadata-0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " pod="openstack/nova-metadata-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.891781 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pmjl\" (UniqueName: \"kubernetes.io/projected/454d8a2d-ab1c-41a6-810f-23687631a17b-kube-api-access-4pmjl\") pod \"nova-api-0\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.917893 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pwx2b"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.920836 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.939377 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.940829 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.940859 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.940984 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-config-data\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.941015 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmx7\" (UniqueName: \"kubernetes.io/projected/2250ea12-361c-47a8-89cf-8c46d41a0ab8-kube-api-access-xdmx7\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.941195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.941228 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnjr7\" (UniqueName: \"kubernetes.io/projected/4cbc738b-7239-431b-9ea6-fde705a328a3-kube-api-access-vnjr7\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.959071 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.959612 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-config-data\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.969440 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pwx2b"]
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.977505 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmx7\" (UniqueName: \"kubernetes.io/projected/2250ea12-361c-47a8-89cf-8c46d41a0ab8-kube-api-access-xdmx7\") pod \"nova-scheduler-0\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:21 crc kubenswrapper[4897]: I0214 19:07:21.983832 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.006621 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044295 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044553 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044600 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044635 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dk6g\" (UniqueName: \"kubernetes.io/projected/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-kube-api-access-4dk6g\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044707 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnjr7\" (UniqueName: \"kubernetes.io/projected/4cbc738b-7239-431b-9ea6-fde705a328a3-kube-api-access-vnjr7\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044779 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044815 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044833 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.044974 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-config\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.049515 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.053656 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.064513 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnjr7\" (UniqueName: \"kubernetes.io/projected/4cbc738b-7239-431b-9ea6-fde705a328a3-kube-api-access-vnjr7\") pod \"nova-cell1-novncproxy-0\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.147052 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.147695 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.147730 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.147758 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dk6g\" (UniqueName: \"kubernetes.io/projected/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-kube-api-access-4dk6g\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.147816 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.147935 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-config\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.149645 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-config\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.149022 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.153022 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\"
(UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.153649 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.154442 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.173843 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dk6g\" (UniqueName: \"kubernetes.io/projected/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-kube-api-access-4dk6g\") pod \"dnsmasq-dns-9b86998b5-pwx2b\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.297634 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.318445 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.538793 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ltbnn"] Feb 14 19:07:22 crc kubenswrapper[4897]: W0214 19:07:22.624845 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883bcca0_6930_4d70_9386_657adbf063c9.slice/crio-10d922bbee54cf6a58c339126f2649fffbb94dd369435e8acf38babe46ded0f5 WatchSource:0}: Error finding container 10d922bbee54cf6a58c339126f2649fffbb94dd369435e8acf38babe46ded0f5: Status 404 returned error can't find the container with id 10d922bbee54cf6a58c339126f2649fffbb94dd369435e8acf38babe46ded0f5 Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.933307 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:22 crc kubenswrapper[4897]: I0214 19:07:22.935179 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.082736 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.121575 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.249500 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.378463 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pwx2b"] Feb 14 19:07:23 crc kubenswrapper[4897]: W0214 19:07:23.433124 4897 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cbc738b_7239_431b_9ea6_fde705a328a3.slice/crio-32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114 WatchSource:0}: Error finding container 32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114: Status 404 returned error can't find the container with id 32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114 Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.456422 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.636824 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454d8a2d-ab1c-41a6-810f-23687631a17b","Type":"ContainerStarted","Data":"f6463169d133ba94e5147c72709cdf39d3075970f4549c98a106ac6e78d3ef51"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.645389 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" event={"ID":"7dc95e64-31a9-4a6a-87fe-bfe2d765966f","Type":"ContainerStarted","Data":"5201133335f271edf49e7996aca2bcdcd236b53880803e4ce1b1f30519d865f4"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.648625 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ed93b6d-b02e-486a-9451-fdb9604769a0","Type":"ContainerStarted","Data":"c88b79547d929453f81fc3b7d5a3f7e5225001975af0fab4d49390bf4d5295aa"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.652578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ltbnn" event={"ID":"883bcca0-6930-4d70-9386-657adbf063c9","Type":"ContainerStarted","Data":"6d7b1cc339eda193c2cebf96994400598d71e3b6213f79b28ff307fc70256467"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.652619 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ltbnn" 
event={"ID":"883bcca0-6930-4d70-9386-657adbf063c9","Type":"ContainerStarted","Data":"10d922bbee54cf6a58c339126f2649fffbb94dd369435e8acf38babe46ded0f5"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.654785 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4cbc738b-7239-431b-9ea6-fde705a328a3","Type":"ContainerStarted","Data":"32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.656884 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2250ea12-361c-47a8-89cf-8c46d41a0ab8","Type":"ContainerStarted","Data":"1c28d9a47d11df4390bcb06fa8ad8a91f900ba73a5b5be7acb14f3dcecd25452"} Feb 14 19:07:23 crc kubenswrapper[4897]: I0214 19:07:23.679527 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ltbnn" podStartSLOduration=2.679506757 podStartE2EDuration="2.679506757s" podCreationTimestamp="2026-02-14 19:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:23.672847449 +0000 UTC m=+1496.649255932" watchObservedRunningTime="2026-02-14 19:07:23.679506757 +0000 UTC m=+1496.655915240" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.030356 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gfs45" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="registry-server" probeResult="failure" output=< Feb 14 19:07:24 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:07:24 crc kubenswrapper[4897]: > Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.510297 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ttbmx"] Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.511943 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.516597 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.517176 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.528392 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ttbmx"] Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.634201 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-scripts\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.634242 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.634375 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qlld\" (UniqueName: \"kubernetes.io/projected/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-kube-api-access-5qlld\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.634404 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-config-data\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.683235 4897 generic.go:334] "Generic (PLEG): container finished" podID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerID="d132f21138a128d59defb0fe7884725ac0bfcd7d800a88bde009e9b3e95b146e" exitCode=0 Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.684100 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" event={"ID":"7dc95e64-31a9-4a6a-87fe-bfe2d765966f","Type":"ContainerDied","Data":"d132f21138a128d59defb0fe7884725ac0bfcd7d800a88bde009e9b3e95b146e"} Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.740103 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qlld\" (UniqueName: \"kubernetes.io/projected/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-kube-api-access-5qlld\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.740192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-config-data\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.740410 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-scripts\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: 
\"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.740429 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.761012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-config-data\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.761641 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qlld\" (UniqueName: \"kubernetes.io/projected/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-kube-api-access-5qlld\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.763553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-scripts\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.768709 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ttbmx\" (UID: 
\"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:24 crc kubenswrapper[4897]: I0214 19:07:24.843311 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:25 crc kubenswrapper[4897]: I0214 19:07:25.293724 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:25 crc kubenswrapper[4897]: I0214 19:07:25.320166 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.276774 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.685797 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ttbmx"] Feb 14 19:07:30 crc kubenswrapper[4897]: W0214 19:07:30.690433 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a2e5b9c_9d4d_430e_8fbd_ae317ee1fdcd.slice/crio-97e77052e566422e9cc9cbd1399982b98d01abeca55b52ad13f7c834d305e238 WatchSource:0}: Error finding container 97e77052e566422e9cc9cbd1399982b98d01abeca55b52ad13f7c834d305e238: Status 404 returned error can't find the container with id 97e77052e566422e9cc9cbd1399982b98d01abeca55b52ad13f7c834d305e238 Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.765341 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" event={"ID":"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd","Type":"ContainerStarted","Data":"97e77052e566422e9cc9cbd1399982b98d01abeca55b52ad13f7c834d305e238"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.773157 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"2250ea12-361c-47a8-89cf-8c46d41a0ab8","Type":"ContainerStarted","Data":"584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.778529 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454d8a2d-ab1c-41a6-810f-23687631a17b","Type":"ContainerStarted","Data":"73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.784466 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" event={"ID":"7dc95e64-31a9-4a6a-87fe-bfe2d765966f","Type":"ContainerStarted","Data":"17dd65258588af3928d6ab2f5068f102834180603077de93e81b78d33318c68a"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.787465 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.800186 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ed93b6d-b02e-486a-9451-fdb9604769a0","Type":"ContainerStarted","Data":"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.804702 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hb6vq" event={"ID":"b9ec880e-a3d2-47d3-86b2-b3e826d66a52","Type":"ContainerStarted","Data":"3dce2f8ca0ce29f937e9656ad397b0b4280859f17385c52f799bb314b5d7703d"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.808224 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.674044956 podStartE2EDuration="9.808208822s" podCreationTimestamp="2026-02-14 19:07:21 +0000 UTC" firstStartedPulling="2026-02-14 19:07:23.131415887 +0000 UTC m=+1496.107824370" lastFinishedPulling="2026-02-14 19:07:30.265579743 +0000 UTC m=+1503.241988236" 
observedRunningTime="2026-02-14 19:07:30.787874133 +0000 UTC m=+1503.764282636" watchObservedRunningTime="2026-02-14 19:07:30.808208822 +0000 UTC m=+1503.784617305" Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.810480 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4cbc738b-7239-431b-9ea6-fde705a328a3","Type":"ContainerStarted","Data":"e7f8ca1035fe4ab44b56a7b5335080d9770958fed63e28ce65ad1d38e20044cc"} Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.810636 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4cbc738b-7239-431b-9ea6-fde705a328a3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e7f8ca1035fe4ab44b56a7b5335080d9770958fed63e28ce65ad1d38e20044cc" gracePeriod=30 Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.810693 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" podStartSLOduration=9.810685479 podStartE2EDuration="9.810685479s" podCreationTimestamp="2026-02-14 19:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:30.804565697 +0000 UTC m=+1503.780974200" watchObservedRunningTime="2026-02-14 19:07:30.810685479 +0000 UTC m=+1503.787093982" Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.824462 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-hb6vq" podStartSLOduration=2.48540559 podStartE2EDuration="12.824444941s" podCreationTimestamp="2026-02-14 19:07:18 +0000 UTC" firstStartedPulling="2026-02-14 19:07:19.926525661 +0000 UTC m=+1492.902934144" lastFinishedPulling="2026-02-14 19:07:30.265565012 +0000 UTC m=+1503.241973495" observedRunningTime="2026-02-14 19:07:30.822016455 +0000 UTC m=+1503.798424938" watchObservedRunningTime="2026-02-14 
19:07:30.824444941 +0000 UTC m=+1503.800853424" Feb 14 19:07:30 crc kubenswrapper[4897]: I0214 19:07:30.844021 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.022132239 podStartE2EDuration="9.843933602s" podCreationTimestamp="2026-02-14 19:07:21 +0000 UTC" firstStartedPulling="2026-02-14 19:07:23.435542581 +0000 UTC m=+1496.411951064" lastFinishedPulling="2026-02-14 19:07:30.257343904 +0000 UTC m=+1503.233752427" observedRunningTime="2026-02-14 19:07:30.839216115 +0000 UTC m=+1503.815624618" watchObservedRunningTime="2026-02-14 19:07:30.843933602 +0000 UTC m=+1503.820342085" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.837896 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" event={"ID":"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd","Type":"ContainerStarted","Data":"a311d23009ed170bc872291a802e4e523b9f509d873bf0226a3762c05d5826c8"} Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.842588 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454d8a2d-ab1c-41a6-810f-23687631a17b","Type":"ContainerStarted","Data":"cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d"} Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.844947 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ed93b6d-b02e-486a-9451-fdb9604769a0","Type":"ContainerStarted","Data":"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5"} Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.845115 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-log" containerID="cri-o://606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1" gracePeriod=30 Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 
19:07:31.845340 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-metadata" containerID="cri-o://55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5" gracePeriod=30 Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.852942 4897 generic.go:334] "Generic (PLEG): container finished" podID="883bcca0-6930-4d70-9386-657adbf063c9" containerID="6d7b1cc339eda193c2cebf96994400598d71e3b6213f79b28ff307fc70256467" exitCode=0 Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.855725 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ltbnn" event={"ID":"883bcca0-6930-4d70-9386-657adbf063c9","Type":"ContainerDied","Data":"6d7b1cc339eda193c2cebf96994400598d71e3b6213f79b28ff307fc70256467"} Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.903671 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.900440604 podStartE2EDuration="10.903646079s" podCreationTimestamp="2026-02-14 19:07:21 +0000 UTC" firstStartedPulling="2026-02-14 19:07:23.25576816 +0000 UTC m=+1496.232176643" lastFinishedPulling="2026-02-14 19:07:30.258973635 +0000 UTC m=+1503.235382118" observedRunningTime="2026-02-14 19:07:31.873054178 +0000 UTC m=+1504.849462671" watchObservedRunningTime="2026-02-14 19:07:31.903646079 +0000 UTC m=+1504.880054562" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.914740 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" podStartSLOduration=7.914704206 podStartE2EDuration="7.914704206s" podCreationTimestamp="2026-02-14 19:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:31.857115118 +0000 UTC m=+1504.833523601" 
watchObservedRunningTime="2026-02-14 19:07:31.914704206 +0000 UTC m=+1504.891112689" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.940260 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.771183516 podStartE2EDuration="10.940238747s" podCreationTimestamp="2026-02-14 19:07:21 +0000 UTC" firstStartedPulling="2026-02-14 19:07:23.110299864 +0000 UTC m=+1496.086708347" lastFinishedPulling="2026-02-14 19:07:30.279355095 +0000 UTC m=+1503.255763578" observedRunningTime="2026-02-14 19:07:31.908698727 +0000 UTC m=+1504.885107220" watchObservedRunningTime="2026-02-14 19:07:31.940238747 +0000 UTC m=+1504.916647230" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.943018 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.943083 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.985416 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 19:07:31 crc kubenswrapper[4897]: I0214 19:07:31.985543 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.007307 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.007376 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.016672 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.302231 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.491886 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.582906 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-combined-ca-bundle\") pod \"9ed93b6d-b02e-486a-9451-fdb9604769a0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.582985 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-config-data\") pod \"9ed93b6d-b02e-486a-9451-fdb9604769a0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.583326 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed93b6d-b02e-486a-9451-fdb9604769a0-logs\") pod \"9ed93b6d-b02e-486a-9451-fdb9604769a0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.583365 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2xml\" (UniqueName: \"kubernetes.io/projected/9ed93b6d-b02e-486a-9451-fdb9604769a0-kube-api-access-w2xml\") pod \"9ed93b6d-b02e-486a-9451-fdb9604769a0\" (UID: \"9ed93b6d-b02e-486a-9451-fdb9604769a0\") " Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.583692 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed93b6d-b02e-486a-9451-fdb9604769a0-logs" (OuterVolumeSpecName: "logs") pod "9ed93b6d-b02e-486a-9451-fdb9604769a0" (UID: "9ed93b6d-b02e-486a-9451-fdb9604769a0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.584427 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed93b6d-b02e-486a-9451-fdb9604769a0-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.589182 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed93b6d-b02e-486a-9451-fdb9604769a0-kube-api-access-w2xml" (OuterVolumeSpecName: "kube-api-access-w2xml") pod "9ed93b6d-b02e-486a-9451-fdb9604769a0" (UID: "9ed93b6d-b02e-486a-9451-fdb9604769a0"). InnerVolumeSpecName "kube-api-access-w2xml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.621127 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ed93b6d-b02e-486a-9451-fdb9604769a0" (UID: "9ed93b6d-b02e-486a-9451-fdb9604769a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.636290 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-config-data" (OuterVolumeSpecName: "config-data") pod "9ed93b6d-b02e-486a-9451-fdb9604769a0" (UID: "9ed93b6d-b02e-486a-9451-fdb9604769a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.686550 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2xml\" (UniqueName: \"kubernetes.io/projected/9ed93b6d-b02e-486a-9451-fdb9604769a0-kube-api-access-w2xml\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.686596 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.686610 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed93b6d-b02e-486a-9451-fdb9604769a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.903931 4897 generic.go:334] "Generic (PLEG): container finished" podID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerID="55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5" exitCode=0 Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.903961 4897 generic.go:334] "Generic (PLEG): container finished" podID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerID="606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1" exitCode=143 Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.904826 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.906635 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ed93b6d-b02e-486a-9451-fdb9604769a0","Type":"ContainerDied","Data":"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5"} Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.906689 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ed93b6d-b02e-486a-9451-fdb9604769a0","Type":"ContainerDied","Data":"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1"} Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.906702 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ed93b6d-b02e-486a-9451-fdb9604769a0","Type":"ContainerDied","Data":"c88b79547d929453f81fc3b7d5a3f7e5225001975af0fab4d49390bf4d5295aa"} Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.906721 4897 scope.go:117] "RemoveContainer" containerID="55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5" Feb 14 19:07:32 crc kubenswrapper[4897]: E0214 19:07:32.920573 4897 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/nova-metadata-0_openstack_nova-metadata-metadata-55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5.log: no such file or directory" path="/var/log/containers/nova-metadata-0_openstack_nova-metadata-metadata-55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5.log" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.955480 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 19:07:32 crc kubenswrapper[4897]: I0214 19:07:32.980012 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.000049 4897 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.019227 4897 scope.go:117] "RemoveContainer" containerID="606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.028333 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.028607 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.029990 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:33 crc kubenswrapper[4897]: E0214 19:07:33.030711 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-log" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.030732 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-log" Feb 14 19:07:33 crc kubenswrapper[4897]: E0214 19:07:33.030794 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-metadata" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.030804 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-metadata" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.031133 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-metadata" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.031158 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" containerName="nova-metadata-log" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.032864 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.037397 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.037597 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.041433 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.076349 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.107651 4897 scope.go:117] "RemoveContainer" containerID="55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5" Feb 14 19:07:33 crc kubenswrapper[4897]: E0214 19:07:33.108886 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5\": container with ID starting with 55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5 not found: ID does not exist" containerID="55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.109185 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5"} err="failed to get container status \"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5\": rpc error: code = NotFound desc = could not find container \"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5\": container with ID starting with 55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5 not found: ID does not exist" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.109414 4897 scope.go:117] "RemoveContainer" containerID="606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1" Feb 14 19:07:33 crc kubenswrapper[4897]: E0214 19:07:33.110406 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1\": container with ID starting with 606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1 not found: ID does not exist" containerID="606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.111276 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1"} err="failed to get container status \"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1\": rpc error: code = NotFound desc = could not find container \"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1\": container with ID starting with 606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1 not found: ID does not exist" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.111385 4897 scope.go:117] "RemoveContainer" containerID="55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.112349 4897 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5"} err="failed to get container status \"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5\": rpc error: code = NotFound desc = could not find container \"55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5\": container with ID starting with 55d237831fc44d68399ad5c4e55ea7f4b62d1f57ab613754f438f92554358cf5 not found: ID does not exist" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.112384 4897 scope.go:117] "RemoveContainer" containerID="606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.112772 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1"} err="failed to get container status \"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1\": rpc error: code = NotFound desc = could not find container \"606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1\": container with ID starting with 606f058d7ef56fc1c1d80c661f2f5d1f1a25d5f930ad15bae53b2ac578487fd1 not found: ID does not exist" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.170936 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.215565 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.215654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h9fmn\" (UniqueName: \"kubernetes.io/projected/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-kube-api-access-h9fmn\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.215704 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.215769 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-logs\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.215856 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-config-data\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.319680 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.319777 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fmn\" (UniqueName: 
\"kubernetes.io/projected/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-kube-api-access-h9fmn\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.319846 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.320793 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-logs\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.321063 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-config-data\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.321791 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-logs\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.329711 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc 
kubenswrapper[4897]: I0214 19:07:33.334345 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfs45"] Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.334647 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-config-data\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.338071 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fmn\" (UniqueName: \"kubernetes.io/projected/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-kube-api-access-h9fmn\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.350666 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.364047 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.513813 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ltbnn" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.627329 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-config-data\") pod \"883bcca0-6930-4d70-9386-657adbf063c9\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.629535 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-scripts\") pod \"883bcca0-6930-4d70-9386-657adbf063c9\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.629621 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kclgj\" (UniqueName: \"kubernetes.io/projected/883bcca0-6930-4d70-9386-657adbf063c9-kube-api-access-kclgj\") pod \"883bcca0-6930-4d70-9386-657adbf063c9\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.629716 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-combined-ca-bundle\") pod \"883bcca0-6930-4d70-9386-657adbf063c9\" (UID: \"883bcca0-6930-4d70-9386-657adbf063c9\") " Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.652104 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-scripts" (OuterVolumeSpecName: "scripts") pod "883bcca0-6930-4d70-9386-657adbf063c9" (UID: "883bcca0-6930-4d70-9386-657adbf063c9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.654198 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883bcca0-6930-4d70-9386-657adbf063c9-kube-api-access-kclgj" (OuterVolumeSpecName: "kube-api-access-kclgj") pod "883bcca0-6930-4d70-9386-657adbf063c9" (UID: "883bcca0-6930-4d70-9386-657adbf063c9"). InnerVolumeSpecName "kube-api-access-kclgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.688478 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883bcca0-6930-4d70-9386-657adbf063c9" (UID: "883bcca0-6930-4d70-9386-657adbf063c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.712002 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-config-data" (OuterVolumeSpecName: "config-data") pod "883bcca0-6930-4d70-9386-657adbf063c9" (UID: "883bcca0-6930-4d70-9386-657adbf063c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.733342 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.733400 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.733410 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883bcca0-6930-4d70-9386-657adbf063c9-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.733419 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kclgj\" (UniqueName: \"kubernetes.io/projected/883bcca0-6930-4d70-9386-657adbf063c9-kube-api-access-kclgj\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.814119 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed93b6d-b02e-486a-9451-fdb9604769a0" path="/var/lib/kubelet/pods/9ed93b6d-b02e-486a-9451-fdb9604769a0/volumes" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.918638 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ltbnn" event={"ID":"883bcca0-6930-4d70-9386-657adbf063c9","Type":"ContainerDied","Data":"10d922bbee54cf6a58c339126f2649fffbb94dd369435e8acf38babe46ded0f5"} Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.918677 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d922bbee54cf6a58c339126f2649fffbb94dd369435e8acf38babe46ded0f5" Feb 14 19:07:33 crc kubenswrapper[4897]: I0214 19:07:33.918749 4897 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ltbnn" Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.049909 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.073132 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.093116 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.130211 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.935430 4897 generic.go:334] "Generic (PLEG): container finished" podID="b9ec880e-a3d2-47d3-86b2-b3e826d66a52" containerID="3dce2f8ca0ce29f937e9656ad397b0b4280859f17385c52f799bb314b5d7703d" exitCode=0 Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.935510 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hb6vq" event={"ID":"b9ec880e-a3d2-47d3-86b2-b3e826d66a52","Type":"ContainerDied","Data":"3dce2f8ca0ce29f937e9656ad397b0b4280859f17385c52f799bb314b5d7703d"} Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938220 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161","Type":"ContainerStarted","Data":"4a771529b77da3107bf7598218eb12e5e79ea442afbb4ed2ea26bdae62f474b6"} Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938266 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161","Type":"ContainerStarted","Data":"13ace9822242cdd4466d44c25c2cc75782bef32490759595330f5cf6b28ec21d"} Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938294 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161","Type":"ContainerStarted","Data":"6454ad43eee33c91bf2f97806311e8702e91cdb2288d66fee43d0b3023cbc1a7"} Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938316 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-log" containerID="cri-o://13ace9822242cdd4466d44c25c2cc75782bef32490759595330f5cf6b28ec21d" gracePeriod=30 Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938379 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gfs45" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="registry-server" containerID="cri-o://17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3" gracePeriod=2 Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938615 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-log" containerID="cri-o://73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7" gracePeriod=30 Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938649 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-api" containerID="cri-o://cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d" gracePeriod=30 Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 19:07:34.938829 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-metadata" containerID="cri-o://4a771529b77da3107bf7598218eb12e5e79ea442afbb4ed2ea26bdae62f474b6" gracePeriod=30 Feb 14 19:07:34 crc kubenswrapper[4897]: I0214 
19:07:34.978308 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.978287238 podStartE2EDuration="2.978287238s" podCreationTimestamp="2026-02-14 19:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:34.977988728 +0000 UTC m=+1507.954397231" watchObservedRunningTime="2026-02-14 19:07:34.978287238 +0000 UTC m=+1507.954695721" Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.543294 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfs45" Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.688985 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-utilities\") pod \"7df22cbc-a251-4d73-8c0c-c83d17200278\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.689177 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdqvk\" (UniqueName: \"kubernetes.io/projected/7df22cbc-a251-4d73-8c0c-c83d17200278-kube-api-access-gdqvk\") pod \"7df22cbc-a251-4d73-8c0c-c83d17200278\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.689382 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-catalog-content\") pod \"7df22cbc-a251-4d73-8c0c-c83d17200278\" (UID: \"7df22cbc-a251-4d73-8c0c-c83d17200278\") " Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.689541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-utilities" 
(OuterVolumeSpecName: "utilities") pod "7df22cbc-a251-4d73-8c0c-c83d17200278" (UID: "7df22cbc-a251-4d73-8c0c-c83d17200278"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.689954 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.701676 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df22cbc-a251-4d73-8c0c-c83d17200278-kube-api-access-gdqvk" (OuterVolumeSpecName: "kube-api-access-gdqvk") pod "7df22cbc-a251-4d73-8c0c-c83d17200278" (UID: "7df22cbc-a251-4d73-8c0c-c83d17200278"). InnerVolumeSpecName "kube-api-access-gdqvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.718484 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7df22cbc-a251-4d73-8c0c-c83d17200278" (UID: "7df22cbc-a251-4d73-8c0c-c83d17200278"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.792168 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdqvk\" (UniqueName: \"kubernetes.io/projected/7df22cbc-a251-4d73-8c0c-c83d17200278-kube-api-access-gdqvk\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.792202 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df22cbc-a251-4d73-8c0c-c83d17200278-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.954481 4897 generic.go:334] "Generic (PLEG): container finished" podID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerID="13ace9822242cdd4466d44c25c2cc75782bef32490759595330f5cf6b28ec21d" exitCode=143
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.954550 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161","Type":"ContainerDied","Data":"13ace9822242cdd4466d44c25c2cc75782bef32490759595330f5cf6b28ec21d"}
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.957852 4897 generic.go:334] "Generic (PLEG): container finished" podID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerID="17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3" exitCode=0
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.957907 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerDied","Data":"17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3"}
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.957950 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfs45"
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.957965 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfs45" event={"ID":"7df22cbc-a251-4d73-8c0c-c83d17200278","Type":"ContainerDied","Data":"0607c3846ec4dd194e3942c131a22b42b0c657ee99d2dd5d5757b40f8e3db189"}
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.958053 4897 scope.go:117] "RemoveContainer" containerID="17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3"
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.960823 4897 generic.go:334] "Generic (PLEG): container finished" podID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerID="73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7" exitCode=143
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.961013 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" containerName="nova-scheduler-scheduler" containerID="cri-o://584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48" gracePeriod=30
Feb 14 19:07:35 crc kubenswrapper[4897]: I0214 19:07:35.961136 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454d8a2d-ab1c-41a6-810f-23687631a17b","Type":"ContainerDied","Data":"73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7"}
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.000999 4897 scope.go:117] "RemoveContainer" containerID="8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.001180 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfs45"]
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.014697 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfs45"]
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.030150 4897 scope.go:117] "RemoveContainer" containerID="1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.081270 4897 scope.go:117] "RemoveContainer" containerID="17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3"
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.082354 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3\": container with ID starting with 17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3 not found: ID does not exist" containerID="17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.082424 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3"} err="failed to get container status \"17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3\": rpc error: code = NotFound desc = could not find container \"17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3\": container with ID starting with 17a12ecb487ee2f4ef7873cbc579963178c766a143418f254d6f2386feebe9d3 not found: ID does not exist"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.082474 4897 scope.go:117] "RemoveContainer" containerID="8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6"
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.083594 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6\": container with ID starting with 8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6 not found: ID does not exist" containerID="8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.083680 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6"} err="failed to get container status \"8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6\": rpc error: code = NotFound desc = could not find container \"8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6\": container with ID starting with 8ee2e5c06f036a06c6f3c01f01873b375ed94b5f0b10049ff27363cdfb933da6 not found: ID does not exist"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.083701 4897 scope.go:117] "RemoveContainer" containerID="1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b"
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.084104 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b\": container with ID starting with 1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b not found: ID does not exist" containerID="1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.084131 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b"} err="failed to get container status \"1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b\": rpc error: code = NotFound desc = could not find container \"1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b\": container with ID starting with 1ea3910cf04a4e02b502c0d982f57340eb3d169e53dc1cbe92715eb99b39389b not found: ID does not exist"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.418075 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.510227 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-scripts\") pod \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") "
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.510513 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-combined-ca-bundle\") pod \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") "
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.510595 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-config-data\") pod \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") "
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.510650 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmbc2\" (UniqueName: \"kubernetes.io/projected/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-kube-api-access-tmbc2\") pod \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\" (UID: \"b9ec880e-a3d2-47d3-86b2-b3e826d66a52\") "
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.516804 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-kube-api-access-tmbc2" (OuterVolumeSpecName: "kube-api-access-tmbc2") pod "b9ec880e-a3d2-47d3-86b2-b3e826d66a52" (UID: "b9ec880e-a3d2-47d3-86b2-b3e826d66a52"). InnerVolumeSpecName "kube-api-access-tmbc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.519394 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-scripts" (OuterVolumeSpecName: "scripts") pod "b9ec880e-a3d2-47d3-86b2-b3e826d66a52" (UID: "b9ec880e-a3d2-47d3-86b2-b3e826d66a52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.547096 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-config-data" (OuterVolumeSpecName: "config-data") pod "b9ec880e-a3d2-47d3-86b2-b3e826d66a52" (UID: "b9ec880e-a3d2-47d3-86b2-b3e826d66a52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.563121 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9ec880e-a3d2-47d3-86b2-b3e826d66a52" (UID: "b9ec880e-a3d2-47d3-86b2-b3e826d66a52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.613738 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmbc2\" (UniqueName: \"kubernetes.io/projected/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-kube-api-access-tmbc2\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.613782 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.613795 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.613806 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ec880e-a3d2-47d3-86b2-b3e826d66a52-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.976771 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hb6vq" event={"ID":"b9ec880e-a3d2-47d3-86b2-b3e826d66a52","Type":"ContainerDied","Data":"2c1b09a1077da6a4e9ead93cdbada3c23f6be62e1e287430379d603ab654ec72"}
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.976820 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1b09a1077da6a4e9ead93cdbada3c23f6be62e1e287430379d603ab654ec72"
Feb 14 19:07:36 crc kubenswrapper[4897]: I0214 19:07:36.976821 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hb6vq"
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.987261 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.988739 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.991170 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 14 19:07:36 crc kubenswrapper[4897]: E0214 19:07:36.991249 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" containerName="nova-scheduler-scheduler"
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.321197 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b"
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.442194 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-kt766"]
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.442498 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerName="dnsmasq-dns" containerID="cri-o://83274da8965f06d985e62ea5e9947a8492df9f55c1a320626386a00ae230fdc1" gracePeriod=10
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.659636 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.814850 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" path="/var/lib/kubelet/pods/7df22cbc-a251-4d73-8c0c-c83d17200278/volumes"
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.842629 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-config-data\") pod \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") "
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.842727 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmx7\" (UniqueName: \"kubernetes.io/projected/2250ea12-361c-47a8-89cf-8c46d41a0ab8-kube-api-access-xdmx7\") pod \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") "
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.842882 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-combined-ca-bundle\") pod \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\" (UID: \"2250ea12-361c-47a8-89cf-8c46d41a0ab8\") "
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.850675 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2250ea12-361c-47a8-89cf-8c46d41a0ab8-kube-api-access-xdmx7" (OuterVolumeSpecName: "kube-api-access-xdmx7") pod "2250ea12-361c-47a8-89cf-8c46d41a0ab8" (UID: "2250ea12-361c-47a8-89cf-8c46d41a0ab8"). InnerVolumeSpecName "kube-api-access-xdmx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.889106 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-config-data" (OuterVolumeSpecName: "config-data") pod "2250ea12-361c-47a8-89cf-8c46d41a0ab8" (UID: "2250ea12-361c-47a8-89cf-8c46d41a0ab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.907414 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2250ea12-361c-47a8-89cf-8c46d41a0ab8" (UID: "2250ea12-361c-47a8-89cf-8c46d41a0ab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.945513 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.945542 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmx7\" (UniqueName: \"kubernetes.io/projected/2250ea12-361c-47a8-89cf-8c46d41a0ab8-kube-api-access-xdmx7\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.945553 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2250ea12-361c-47a8-89cf-8c46d41a0ab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.994630 4897 generic.go:334] "Generic (PLEG): container finished" podID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48" exitCode=0
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.994723 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.994682 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2250ea12-361c-47a8-89cf-8c46d41a0ab8","Type":"ContainerDied","Data":"584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48"}
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.994858 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2250ea12-361c-47a8-89cf-8c46d41a0ab8","Type":"ContainerDied","Data":"1c28d9a47d11df4390bcb06fa8ad8a91f900ba73a5b5be7acb14f3dcecd25452"}
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.994878 4897 scope.go:117] "RemoveContainer" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48"
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.999198 4897 generic.go:334] "Generic (PLEG): container finished" podID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerID="83274da8965f06d985e62ea5e9947a8492df9f55c1a320626386a00ae230fdc1" exitCode=0
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.999248 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" event={"ID":"e56dc64b-fe4e-4e4c-9266-cb073ab171e8","Type":"ContainerDied","Data":"83274da8965f06d985e62ea5e9947a8492df9f55c1a320626386a00ae230fdc1"}
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.999274 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-kt766" event={"ID":"e56dc64b-fe4e-4e4c-9266-cb073ab171e8","Type":"ContainerDied","Data":"45a8a84ffb21aec9f7f04f5d839c5755d87a161f60fc5d3f56eac86256c4745a"}
Feb 14 19:07:37 crc kubenswrapper[4897]: I0214 19:07:37.999285 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a8a84ffb21aec9f7f04f5d839c5755d87a161f60fc5d3f56eac86256c4745a"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.058109 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-kt766"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.070247 4897 scope.go:117] "RemoveContainer" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.070934 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48\": container with ID starting with 584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48 not found: ID does not exist" containerID="584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.070980 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48"} err="failed to get container status \"584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48\": rpc error: code = NotFound desc = could not find container \"584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48\": container with ID starting with 584773dfa5c2accb7dcb45806cd4e4889305d351908d7bf35b41798f8d407e48 not found: ID does not exist"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.072641 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.092318 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.106442 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.106970 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerName="init"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.106989 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerName="init"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107004 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="extract-utilities"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107011 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="extract-utilities"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107074 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" containerName="nova-scheduler-scheduler"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107082 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" containerName="nova-scheduler-scheduler"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107095 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec880e-a3d2-47d3-86b2-b3e826d66a52" containerName="aodh-db-sync"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107101 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec880e-a3d2-47d3-86b2-b3e826d66a52" containerName="aodh-db-sync"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107120 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="registry-server"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107127 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="registry-server"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107159 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="extract-content"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107165 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="extract-content"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107175 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883bcca0-6930-4d70-9386-657adbf063c9" containerName="nova-manage"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107181 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="883bcca0-6930-4d70-9386-657adbf063c9" containerName="nova-manage"
Feb 14 19:07:38 crc kubenswrapper[4897]: E0214 19:07:38.107187 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerName="dnsmasq-dns"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107193 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerName="dnsmasq-dns"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107389 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df22cbc-a251-4d73-8c0c-c83d17200278" containerName="registry-server"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107402 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ec880e-a3d2-47d3-86b2-b3e826d66a52" containerName="aodh-db-sync"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107415 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" containerName="dnsmasq-dns"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107427 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="883bcca0-6930-4d70-9386-657adbf063c9" containerName="nova-manage"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.107442 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" containerName="nova-scheduler-scheduler"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.108227 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.116939 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.142351 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.250967 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-sb\") pod \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") "
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.251123 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqttv\" (UniqueName: \"kubernetes.io/projected/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-kube-api-access-jqttv\") pod \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") "
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.251175 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-config\") pod \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") "
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.251228 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-swift-storage-0\") pod \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") "
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.251403 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-svc\") pod \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") "
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.251501 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-nb\") pod \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\" (UID: \"e56dc64b-fe4e-4e4c-9266-cb073ab171e8\") "
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.252274 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv9k7\" (UniqueName: \"kubernetes.io/projected/95f8be13-487d-4d73-91c5-0996935e042c-kube-api-access-wv9k7\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.252661 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.252694 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-config-data\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.261501 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-kube-api-access-jqttv" (OuterVolumeSpecName: "kube-api-access-jqttv") pod "e56dc64b-fe4e-4e4c-9266-cb073ab171e8" (UID: "e56dc64b-fe4e-4e4c-9266-cb073ab171e8"). InnerVolumeSpecName "kube-api-access-jqttv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.358923 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.358995 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-config-data\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.359220 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv9k7\" (UniqueName: \"kubernetes.io/projected/95f8be13-487d-4d73-91c5-0996935e042c-kube-api-access-wv9k7\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.359427 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqttv\" (UniqueName: \"kubernetes.io/projected/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-kube-api-access-jqttv\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.364456 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.364516 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.373391 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-config-data\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.375671 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e56dc64b-fe4e-4e4c-9266-cb073ab171e8" (UID: "e56dc64b-fe4e-4e4c-9266-cb073ab171e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.376547 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.380710 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e56dc64b-fe4e-4e4c-9266-cb073ab171e8" (UID: "e56dc64b-fe4e-4e4c-9266-cb073ab171e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.384556 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv9k7\" (UniqueName: \"kubernetes.io/projected/95f8be13-487d-4d73-91c5-0996935e042c-kube-api-access-wv9k7\") pod \"nova-scheduler-0\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.388501 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e56dc64b-fe4e-4e4c-9266-cb073ab171e8" (UID: "e56dc64b-fe4e-4e4c-9266-cb073ab171e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.427013 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.428330 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-config" (OuterVolumeSpecName: "config") pod "e56dc64b-fe4e-4e4c-9266-cb073ab171e8" (UID: "e56dc64b-fe4e-4e4c-9266-cb073ab171e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.459545 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e56dc64b-fe4e-4e4c-9266-cb073ab171e8" (UID: "e56dc64b-fe4e-4e4c-9266-cb073ab171e8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.462396 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-config\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.462438 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.462456 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.462467 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:38 crc kubenswrapper[4897]: I0214 19:07:38.462477 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56dc64b-fe4e-4e4c-9266-cb073ab171e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.003786 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.011344 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-kt766"
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.194605 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-kt766"]
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.205258 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-kt766"]
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.227500 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.230608 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.237676 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.237853 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5zcr5"
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.238878 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.249186 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.386536 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88sln\" (UniqueName: \"kubernetes.io/projected/02935790-1dbb-42a8-8f04-1314338f3425-kube-api-access-88sln\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0"
Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.386714 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-config-data\") pod
\"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.386790 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-combined-ca-bundle\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.387240 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-scripts\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.489279 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-scripts\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.489424 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88sln\" (UniqueName: \"kubernetes.io/projected/02935790-1dbb-42a8-8f04-1314338f3425-kube-api-access-88sln\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.489519 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-config-data\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.489561 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-combined-ca-bundle\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.495566 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-config-data\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.495801 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-scripts\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.496057 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-combined-ca-bundle\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.511494 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88sln\" (UniqueName: \"kubernetes.io/projected/02935790-1dbb-42a8-8f04-1314338f3425-kube-api-access-88sln\") pod \"aodh-0\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.552978 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.818592 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2250ea12-361c-47a8-89cf-8c46d41a0ab8" path="/var/lib/kubelet/pods/2250ea12-361c-47a8-89cf-8c46d41a0ab8/volumes" Feb 14 19:07:39 crc kubenswrapper[4897]: I0214 19:07:39.819514 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56dc64b-fe4e-4e4c-9266-cb073ab171e8" path="/var/lib/kubelet/pods/e56dc64b-fe4e-4e4c-9266-cb073ab171e8/volumes" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.016222 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.042471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95f8be13-487d-4d73-91c5-0996935e042c","Type":"ContainerStarted","Data":"4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f"} Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.043125 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95f8be13-487d-4d73-91c5-0996935e042c","Type":"ContainerStarted","Data":"6aee2472e0548b996d4b4589752644f5b13af40f874da39ccbfbd1391f47f986"} Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.059923 4897 generic.go:334] "Generic (PLEG): container finished" podID="7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" containerID="a311d23009ed170bc872291a802e4e523b9f509d873bf0226a3762c05d5826c8" exitCode=0 Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.060017 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" event={"ID":"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd","Type":"ContainerDied","Data":"a311d23009ed170bc872291a802e4e523b9f509d873bf0226a3762c05d5826c8"} Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.067493 4897 generic.go:334] "Generic 
(PLEG): container finished" podID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerID="cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d" exitCode=0 Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.067545 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454d8a2d-ab1c-41a6-810f-23687631a17b","Type":"ContainerDied","Data":"cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d"} Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.067553 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.067571 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454d8a2d-ab1c-41a6-810f-23687631a17b","Type":"ContainerDied","Data":"f6463169d133ba94e5147c72709cdf39d3075970f4549c98a106ac6e78d3ef51"} Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.067589 4897 scope.go:117] "RemoveContainer" containerID="cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.097208 4897 scope.go:117] "RemoveContainer" containerID="73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.103255 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.10323523 podStartE2EDuration="2.10323523s" podCreationTimestamp="2026-02-14 19:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:40.070654517 +0000 UTC m=+1513.047063000" watchObservedRunningTime="2026-02-14 19:07:40.10323523 +0000 UTC m=+1513.079643713" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.143837 4897 scope.go:117] "RemoveContainer" 
containerID="cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d" Feb 14 19:07:40 crc kubenswrapper[4897]: E0214 19:07:40.145131 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d\": container with ID starting with cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d not found: ID does not exist" containerID="cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.145178 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d"} err="failed to get container status \"cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d\": rpc error: code = NotFound desc = could not find container \"cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d\": container with ID starting with cd08132736f280958403011f43f50dcc4964914eff69a5183d974affac19230d not found: ID does not exist" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.145203 4897 scope.go:117] "RemoveContainer" containerID="73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7" Feb 14 19:07:40 crc kubenswrapper[4897]: E0214 19:07:40.145877 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7\": container with ID starting with 73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7 not found: ID does not exist" containerID="73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.145901 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7"} err="failed to get container status \"73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7\": rpc error: code = NotFound desc = could not find container \"73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7\": container with ID starting with 73026f67c141207bb38b490bddeef87e8210761e4816c62e392b3e776f5bdff7 not found: ID does not exist" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.149634 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 19:07:40 crc kubenswrapper[4897]: W0214 19:07:40.153301 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02935790_1dbb_42a8_8f04_1314338f3425.slice/crio-e8bdf80060c70e21751d8dc942a31ee0062545c6a4edd9a36ba50f5a72fbe2f8 WatchSource:0}: Error finding container e8bdf80060c70e21751d8dc942a31ee0062545c6a4edd9a36ba50f5a72fbe2f8: Status 404 returned error can't find the container with id e8bdf80060c70e21751d8dc942a31ee0062545c6a4edd9a36ba50f5a72fbe2f8 Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.214068 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-config-data\") pod \"454d8a2d-ab1c-41a6-810f-23687631a17b\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.214184 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pmjl\" (UniqueName: \"kubernetes.io/projected/454d8a2d-ab1c-41a6-810f-23687631a17b-kube-api-access-4pmjl\") pod \"454d8a2d-ab1c-41a6-810f-23687631a17b\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.214227 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-combined-ca-bundle\") pod \"454d8a2d-ab1c-41a6-810f-23687631a17b\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.214423 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454d8a2d-ab1c-41a6-810f-23687631a17b-logs\") pod \"454d8a2d-ab1c-41a6-810f-23687631a17b\" (UID: \"454d8a2d-ab1c-41a6-810f-23687631a17b\") " Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.216585 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454d8a2d-ab1c-41a6-810f-23687631a17b-logs" (OuterVolumeSpecName: "logs") pod "454d8a2d-ab1c-41a6-810f-23687631a17b" (UID: "454d8a2d-ab1c-41a6-810f-23687631a17b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.229686 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454d8a2d-ab1c-41a6-810f-23687631a17b-kube-api-access-4pmjl" (OuterVolumeSpecName: "kube-api-access-4pmjl") pod "454d8a2d-ab1c-41a6-810f-23687631a17b" (UID: "454d8a2d-ab1c-41a6-810f-23687631a17b"). InnerVolumeSpecName "kube-api-access-4pmjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.254192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "454d8a2d-ab1c-41a6-810f-23687631a17b" (UID: "454d8a2d-ab1c-41a6-810f-23687631a17b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.277120 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-config-data" (OuterVolumeSpecName: "config-data") pod "454d8a2d-ab1c-41a6-810f-23687631a17b" (UID: "454d8a2d-ab1c-41a6-810f-23687631a17b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.317329 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.317370 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pmjl\" (UniqueName: \"kubernetes.io/projected/454d8a2d-ab1c-41a6-810f-23687631a17b-kube-api-access-4pmjl\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.317385 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454d8a2d-ab1c-41a6-810f-23687631a17b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.317394 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454d8a2d-ab1c-41a6-810f-23687631a17b-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.404668 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.432473 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.458489 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:40 crc 
kubenswrapper[4897]: E0214 19:07:40.459082 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-log" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.459099 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-log" Feb 14 19:07:40 crc kubenswrapper[4897]: E0214 19:07:40.459122 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-api" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.459128 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-api" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.459486 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-api" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.459517 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" containerName="nova-api-log" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.460712 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.463046 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.483709 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.526351 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb07f36-326e-4d0b-979e-26075640b85a-logs\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.526658 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.526767 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjz6\" (UniqueName: \"kubernetes.io/projected/7fb07f36-326e-4d0b-979e-26075640b85a-kube-api-access-qwjz6\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.526824 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-config-data\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.629393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-config-data\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.629502 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb07f36-326e-4d0b-979e-26075640b85a-logs\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.629645 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.629694 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwjz6\" (UniqueName: \"kubernetes.io/projected/7fb07f36-326e-4d0b-979e-26075640b85a-kube-api-access-qwjz6\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.630094 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb07f36-326e-4d0b-979e-26075640b85a-logs\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.633394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.633941 
4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-config-data\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.649673 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwjz6\" (UniqueName: \"kubernetes.io/projected/7fb07f36-326e-4d0b-979e-26075640b85a-kube-api-access-qwjz6\") pod \"nova-api-0\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " pod="openstack/nova-api-0" Feb 14 19:07:40 crc kubenswrapper[4897]: I0214 19:07:40.783174 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.117314 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerStarted","Data":"418d2798011a99aa5b8b7f21d3b60db521e1db1f9058e2d39a112e37d838134f"} Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.117686 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerStarted","Data":"e8bdf80060c70e21751d8dc942a31ee0062545c6a4edd9a36ba50f5a72fbe2f8"} Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.287773 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.778455 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.810674 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454d8a2d-ab1c-41a6-810f-23687631a17b" path="/var/lib/kubelet/pods/454d8a2d-ab1c-41a6-810f-23687631a17b/volumes" Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.962604 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-combined-ca-bundle\") pod \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.962750 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qlld\" (UniqueName: \"kubernetes.io/projected/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-kube-api-access-5qlld\") pod \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.962782 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-config-data\") pod \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.964317 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-scripts\") pod \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\" (UID: \"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd\") " Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.968444 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-scripts" (OuterVolumeSpecName: "scripts") pod 
"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" (UID: "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:41 crc kubenswrapper[4897]: I0214 19:07:41.970286 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-kube-api-access-5qlld" (OuterVolumeSpecName: "kube-api-access-5qlld") pod "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" (UID: "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd"). InnerVolumeSpecName "kube-api-access-5qlld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.005356 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-config-data" (OuterVolumeSpecName: "config-data") pod "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" (UID: "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.021800 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" (UID: "7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.066119 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qlld\" (UniqueName: \"kubernetes.io/projected/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-kube-api-access-5qlld\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.066150 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.066158 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.066167 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.134761 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb07f36-326e-4d0b-979e-26075640b85a","Type":"ContainerStarted","Data":"e18080bfcdd4ee93a0bbe26ab97104aadfbb4e4778aaa6c8faafabc246290a60"} Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.134811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb07f36-326e-4d0b-979e-26075640b85a","Type":"ContainerStarted","Data":"b9453114091ce0721c5829c8a28fcf2fdab580fada4c6f8accf9ca2b6c27bc6b"} Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.134821 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7fb07f36-326e-4d0b-979e-26075640b85a","Type":"ContainerStarted","Data":"097badb205d0d78180280ed54c477c05bad1395d6d48d9eb07f026442ae7baa0"} Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.140571 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" event={"ID":"7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd","Type":"ContainerDied","Data":"97e77052e566422e9cc9cbd1399982b98d01abeca55b52ad13f7c834d305e238"} Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.140609 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e77052e566422e9cc9cbd1399982b98d01abeca55b52ad13f7c834d305e238" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.140663 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ttbmx" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.154247 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.1542323039999998 podStartE2EDuration="2.154232304s" podCreationTimestamp="2026-02-14 19:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:42.152146619 +0000 UTC m=+1515.128555122" watchObservedRunningTime="2026-02-14 19:07:42.154232304 +0000 UTC m=+1515.130640787" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.186616 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 19:07:42 crc kubenswrapper[4897]: E0214 19:07:42.187321 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" containerName="nova-cell1-conductor-db-sync" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.187421 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" 
containerName="nova-cell1-conductor-db-sync" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.187748 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" containerName="nova-cell1-conductor-db-sync" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.188614 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.200511 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.218442 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.372628 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.372995 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.373050 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt6k8\" (UniqueName: \"kubernetes.io/projected/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-kube-api-access-rt6k8\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc 
kubenswrapper[4897]: I0214 19:07:42.475076 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.475280 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.475304 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt6k8\" (UniqueName: \"kubernetes.io/projected/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-kube-api-access-rt6k8\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.480454 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.481307 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.494823 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rt6k8\" (UniqueName: \"kubernetes.io/projected/fd17288a-e92b-4fea-9b86-9cf6c22f1b34-kube-api-access-rt6k8\") pod \"nova-cell1-conductor-0\" (UID: \"fd17288a-e92b-4fea-9b86-9cf6c22f1b34\") " pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.509587 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.654816 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.655104 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-central-agent" containerID="cri-o://58403a42b92c553e49ef2d748b8392444527d201c48ea6d6e9a6b5d51b768eaf" gracePeriod=30 Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.655292 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="proxy-httpd" containerID="cri-o://470efcc96ea10d6f36db068042218f2ae8ebb22f4fd9cacd4f039acc58b2afd8" gracePeriod=30 Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.655351 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="sg-core" containerID="cri-o://8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e" gracePeriod=30 Feb 14 19:07:42 crc kubenswrapper[4897]: I0214 19:07:42.655382 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-notification-agent" containerID="cri-o://9bc7ae3d8b6287e823d2e339beadc0d4fd4c05ad74e4eb2b3f028a866e9c4e6b" gracePeriod=30 Feb 14 19:07:42 crc 
kubenswrapper[4897]: I0214 19:07:42.666369 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": EOF" Feb 14 19:07:42 crc kubenswrapper[4897]: E0214 19:07:42.838107 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bae8002_6e52_4df1_b7d6_e42290023f2f.slice/crio-conmon-8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bae8002_6e52_4df1_b7d6_e42290023f2f.slice/crio-8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.157794 4897 generic.go:334] "Generic (PLEG): container finished" podID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerID="470efcc96ea10d6f36db068042218f2ae8ebb22f4fd9cacd4f039acc58b2afd8" exitCode=0 Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.158086 4897 generic.go:334] "Generic (PLEG): container finished" podID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerID="8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e" exitCode=2 Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.157865 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerDied","Data":"470efcc96ea10d6f36db068042218f2ae8ebb22f4fd9cacd4f039acc58b2afd8"} Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.158137 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerDied","Data":"8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e"} Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.158152 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerDied","Data":"58403a42b92c553e49ef2d748b8392444527d201c48ea6d6e9a6b5d51b768eaf"} Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.158099 4897 generic.go:334] "Generic (PLEG): container finished" podID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerID="58403a42b92c553e49ef2d748b8392444527d201c48ea6d6e9a6b5d51b768eaf" exitCode=0 Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.164706 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerStarted","Data":"0f6bcc97305beb7768ff5b28ce7f58e796ad52e3e7f9815758c82419013fb212"} Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.171903 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.267676 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 19:07:43 crc kubenswrapper[4897]: I0214 19:07:43.428394 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 19:07:44 crc kubenswrapper[4897]: I0214 19:07:44.174565 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fd17288a-e92b-4fea-9b86-9cf6c22f1b34","Type":"ContainerStarted","Data":"1dcc4c8d9ef5793f9baca25825f234bc09a922a7ad96209794debf7dc3a012b5"} Feb 14 19:07:44 crc kubenswrapper[4897]: I0214 19:07:44.174897 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"fd17288a-e92b-4fea-9b86-9cf6c22f1b34","Type":"ContainerStarted","Data":"2deeedb1a72fd3a3e97facef85b27eaa3764674831d27e211f8001e6716f0214"} Feb 14 19:07:44 crc kubenswrapper[4897]: I0214 19:07:44.175071 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 14 19:07:44 crc kubenswrapper[4897]: I0214 19:07:44.766203 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": dial tcp 10.217.0.244:3000: connect: connection refused" Feb 14 19:07:45 crc kubenswrapper[4897]: I0214 19:07:45.213413 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerStarted","Data":"1843670e358a6514a4926d4c57fbef874c0393403f8ca8040d5eeb0807c8d34f"} Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.234584 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerStarted","Data":"f7cd8662cdc582c53a5a67669c742128da4283371a38819e41c7a28b0d5b8a56"} Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.235153 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-api" containerID="cri-o://418d2798011a99aa5b8b7f21d3b60db521e1db1f9058e2d39a112e37d838134f" gracePeriod=30 Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.235510 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-notifier" containerID="cri-o://1843670e358a6514a4926d4c57fbef874c0393403f8ca8040d5eeb0807c8d34f" gracePeriod=30 Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.235636 4897 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-listener" containerID="cri-o://f7cd8662cdc582c53a5a67669c742128da4283371a38819e41c7a28b0d5b8a56" gracePeriod=30 Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.235692 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-evaluator" containerID="cri-o://0f6bcc97305beb7768ff5b28ce7f58e796ad52e3e7f9815758c82419013fb212" gracePeriod=30 Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.254698 4897 generic.go:334] "Generic (PLEG): container finished" podID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerID="9bc7ae3d8b6287e823d2e339beadc0d4fd4c05ad74e4eb2b3f028a866e9c4e6b" exitCode=0 Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.254738 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerDied","Data":"9bc7ae3d8b6287e823d2e339beadc0d4fd4c05ad74e4eb2b3f028a866e9c4e6b"} Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.279732 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.628757862 podStartE2EDuration="7.279714531s" podCreationTimestamp="2026-02-14 19:07:39 +0000 UTC" firstStartedPulling="2026-02-14 19:07:40.156633486 +0000 UTC m=+1513.133041969" lastFinishedPulling="2026-02-14 19:07:45.807590145 +0000 UTC m=+1518.783998638" observedRunningTime="2026-02-14 19:07:46.253719916 +0000 UTC m=+1519.230128399" watchObservedRunningTime="2026-02-14 19:07:46.279714531 +0000 UTC m=+1519.256123014" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.280364 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.280357271 
podStartE2EDuration="4.280357271s" podCreationTimestamp="2026-02-14 19:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:07:44.198271291 +0000 UTC m=+1517.174679784" watchObservedRunningTime="2026-02-14 19:07:46.280357271 +0000 UTC m=+1519.256765754" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.306595 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469210 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-run-httpd\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469274 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94bz4\" (UniqueName: \"kubernetes.io/projected/7bae8002-6e52-4df1-b7d6-e42290023f2f-kube-api-access-94bz4\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469419 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-config-data\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469441 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-log-httpd\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469514 4897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-scripts\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469587 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-combined-ca-bundle\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.469617 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-sg-core-conf-yaml\") pod \"7bae8002-6e52-4df1-b7d6-e42290023f2f\" (UID: \"7bae8002-6e52-4df1-b7d6-e42290023f2f\") " Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.473209 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.474435 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.479262 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bae8002-6e52-4df1-b7d6-e42290023f2f-kube-api-access-94bz4" (OuterVolumeSpecName: "kube-api-access-94bz4") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "kube-api-access-94bz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.488639 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-scripts" (OuterVolumeSpecName: "scripts") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.510766 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.572281 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.572610 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.572621 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94bz4\" (UniqueName: \"kubernetes.io/projected/7bae8002-6e52-4df1-b7d6-e42290023f2f-kube-api-access-94bz4\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.572651 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bae8002-6e52-4df1-b7d6-e42290023f2f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.572661 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.580239 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.611074 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-config-data" (OuterVolumeSpecName: "config-data") pod "7bae8002-6e52-4df1-b7d6-e42290023f2f" (UID: "7bae8002-6e52-4df1-b7d6-e42290023f2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.675248 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:46 crc kubenswrapper[4897]: I0214 19:07:46.675294 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bae8002-6e52-4df1-b7d6-e42290023f2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.278393 4897 generic.go:334] "Generic (PLEG): container finished" podID="02935790-1dbb-42a8-8f04-1314338f3425" containerID="0f6bcc97305beb7768ff5b28ce7f58e796ad52e3e7f9815758c82419013fb212" exitCode=0 Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.278421 4897 generic.go:334] "Generic (PLEG): container finished" podID="02935790-1dbb-42a8-8f04-1314338f3425" containerID="418d2798011a99aa5b8b7f21d3b60db521e1db1f9058e2d39a112e37d838134f" exitCode=0 Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.278457 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerDied","Data":"0f6bcc97305beb7768ff5b28ce7f58e796ad52e3e7f9815758c82419013fb212"} Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.278493 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerDied","Data":"418d2798011a99aa5b8b7f21d3b60db521e1db1f9058e2d39a112e37d838134f"} Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.281092 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bae8002-6e52-4df1-b7d6-e42290023f2f","Type":"ContainerDied","Data":"133db7345cbddd31345ecd198055d455250004288cbca6229e896a68724ecd03"} Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.281117 4897 scope.go:117] "RemoveContainer" containerID="470efcc96ea10d6f36db068042218f2ae8ebb22f4fd9cacd4f039acc58b2afd8" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.281242 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.325000 4897 scope.go:117] "RemoveContainer" containerID="8cfb5f3a828c556fafc416a5991274b7fb91f13dc1dba21f25977b404f2c7c3e" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.327764 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.345379 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.362675 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:47 crc kubenswrapper[4897]: E0214 19:07:47.363464 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="proxy-httpd" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363477 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="proxy-httpd" Feb 14 19:07:47 crc kubenswrapper[4897]: E0214 19:07:47.363493 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" 
containerName="ceilometer-central-agent" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363498 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-central-agent" Feb 14 19:07:47 crc kubenswrapper[4897]: E0214 19:07:47.363513 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="sg-core" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363519 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="sg-core" Feb 14 19:07:47 crc kubenswrapper[4897]: E0214 19:07:47.363555 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-notification-agent" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363561 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-notification-agent" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363776 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="sg-core" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363790 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-notification-agent" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363803 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="ceilometer-central-agent" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.363819 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" containerName="proxy-httpd" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.366615 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.370167 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.370359 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.392581 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.393116 4897 scope.go:117] "RemoveContainer" containerID="9bc7ae3d8b6287e823d2e339beadc0d4fd4c05ad74e4eb2b3f028a866e9c4e6b" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.436812 4897 scope.go:117] "RemoveContainer" containerID="58403a42b92c553e49ef2d748b8392444527d201c48ea6d6e9a6b5d51b768eaf" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.490712 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.491484 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-log-httpd\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.491617 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-scripts\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 
14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.491787 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-config-data\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.491851 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.491981 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-run-httpd\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.492220 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zmts\" (UniqueName: \"kubernetes.io/projected/80f924e2-0f79-47b0-ac1b-c909d06c87d1-kube-api-access-5zmts\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594286 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-config-data\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594337 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-run-httpd\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594451 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zmts\" (UniqueName: \"kubernetes.io/projected/80f924e2-0f79-47b0-ac1b-c909d06c87d1-kube-api-access-5zmts\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594526 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594554 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-log-httpd\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.594603 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-scripts\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 
crc kubenswrapper[4897]: I0214 19:07:47.595492 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-log-httpd\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.595728 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-run-httpd\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.600386 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.600762 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-config-data\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.611119 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.611364 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-scripts\") pod \"ceilometer-0\" (UID: 
\"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.618608 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zmts\" (UniqueName: \"kubernetes.io/projected/80f924e2-0f79-47b0-ac1b-c909d06c87d1-kube-api-access-5zmts\") pod \"ceilometer-0\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.707064 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:07:47 crc kubenswrapper[4897]: I0214 19:07:47.831252 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bae8002-6e52-4df1-b7d6-e42290023f2f" path="/var/lib/kubelet/pods/7bae8002-6e52-4df1-b7d6-e42290023f2f/volumes" Feb 14 19:07:48 crc kubenswrapper[4897]: I0214 19:07:48.210993 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:07:48 crc kubenswrapper[4897]: I0214 19:07:48.299286 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerStarted","Data":"3a8e0e273a28c32032f8f92270d07b95659cae62930909fd3ccaeb8dd3f4eef6"} Feb 14 19:07:48 crc kubenswrapper[4897]: I0214 19:07:48.428250 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 19:07:48 crc kubenswrapper[4897]: I0214 19:07:48.462252 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 19:07:49 crc kubenswrapper[4897]: I0214 19:07:49.316649 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerStarted","Data":"98adb8ae7c7ef5cb4526aa6393959e73e6573b02b2592fa72f49a838c32054f8"} Feb 14 19:07:49 crc kubenswrapper[4897]: I0214 
19:07:49.350082 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 19:07:50 crc kubenswrapper[4897]: I0214 19:07:50.338128 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerStarted","Data":"ef13eb33e37343eb8b626255d7b1d9b5020c76826c4bbbc52fcd20a9cde2fd96"} Feb 14 19:07:50 crc kubenswrapper[4897]: I0214 19:07:50.784103 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 19:07:50 crc kubenswrapper[4897]: I0214 19:07:50.784514 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 19:07:51 crc kubenswrapper[4897]: I0214 19:07:51.350662 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerStarted","Data":"cd08ce55d33a0e32a87e0964079a005504da1287de30eaea2b90fab4b03b0000"} Feb 14 19:07:51 crc kubenswrapper[4897]: I0214 19:07:51.866206 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 19:07:51 crc kubenswrapper[4897]: I0214 19:07:51.866264 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 19:07:52 crc kubenswrapper[4897]: I0214 19:07:52.379384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerStarted","Data":"7f0c1aef7c812ce1bfcac35588fc1e3a3f7454952bef0cedaf236abfc62ec363"} Feb 14 19:07:52 crc kubenswrapper[4897]: I0214 19:07:52.379779 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 19:07:52 crc kubenswrapper[4897]: I0214 19:07:52.447890 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.128930195 podStartE2EDuration="5.447863751s" podCreationTimestamp="2026-02-14 19:07:47 +0000 UTC" firstStartedPulling="2026-02-14 19:07:48.217292486 +0000 UTC m=+1521.193700979" lastFinishedPulling="2026-02-14 19:07:51.536226052 +0000 UTC m=+1524.512634535" observedRunningTime="2026-02-14 19:07:52.402210628 +0000 UTC m=+1525.378619151" watchObservedRunningTime="2026-02-14 19:07:52.447863751 +0000 UTC m=+1525.424272224" Feb 14 19:07:52 crc kubenswrapper[4897]: I0214 19:07:52.546406 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 14 19:08:00 crc kubenswrapper[4897]: I0214 19:08:00.798254 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 19:08:00 crc kubenswrapper[4897]: I0214 19:08:00.799490 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 19:08:00 crc kubenswrapper[4897]: I0214 19:08:00.800570 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 19:08:00 crc kubenswrapper[4897]: I0214 19:08:00.806079 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.504975 4897 generic.go:334] "Generic (PLEG): container finished" podID="4cbc738b-7239-431b-9ea6-fde705a328a3" containerID="e7f8ca1035fe4ab44b56a7b5335080d9770958fed63e28ce65ad1d38e20044cc" 
exitCode=137 Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.507137 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4cbc738b-7239-431b-9ea6-fde705a328a3","Type":"ContainerDied","Data":"e7f8ca1035fe4ab44b56a7b5335080d9770958fed63e28ce65ad1d38e20044cc"} Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.507191 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4cbc738b-7239-431b-9ea6-fde705a328a3","Type":"ContainerDied","Data":"32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114"} Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.507212 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.507235 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.512280 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.524386 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.669754 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-config-data\") pod \"4cbc738b-7239-431b-9ea6-fde705a328a3\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.669905 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-combined-ca-bundle\") pod \"4cbc738b-7239-431b-9ea6-fde705a328a3\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.669946 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnjr7\" (UniqueName: \"kubernetes.io/projected/4cbc738b-7239-431b-9ea6-fde705a328a3-kube-api-access-vnjr7\") pod \"4cbc738b-7239-431b-9ea6-fde705a328a3\" (UID: \"4cbc738b-7239-431b-9ea6-fde705a328a3\") " Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.697214 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbc738b-7239-431b-9ea6-fde705a328a3-kube-api-access-vnjr7" (OuterVolumeSpecName: "kube-api-access-vnjr7") pod "4cbc738b-7239-431b-9ea6-fde705a328a3" (UID: "4cbc738b-7239-431b-9ea6-fde705a328a3"). InnerVolumeSpecName "kube-api-access-vnjr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.723537 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cbc738b-7239-431b-9ea6-fde705a328a3" (UID: "4cbc738b-7239-431b-9ea6-fde705a328a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.772888 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.777810 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnjr7\" (UniqueName: \"kubernetes.io/projected/4cbc738b-7239-431b-9ea6-fde705a328a3-kube-api-access-vnjr7\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.773273 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-config-data" (OuterVolumeSpecName: "config-data") pod "4cbc738b-7239-431b-9ea6-fde705a328a3" (UID: "4cbc738b-7239-431b-9ea6-fde705a328a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.782865 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"] Feb 14 19:08:01 crc kubenswrapper[4897]: E0214 19:08:01.783397 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cbc738b-7239-431b-9ea6-fde705a328a3" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.783411 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cbc738b-7239-431b-9ea6-fde705a328a3" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.783753 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbc738b-7239-431b-9ea6-fde705a328a3" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.785830 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.820808 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"] Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.880015 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cbc738b-7239-431b-9ea6-fde705a328a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.981369 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.981438 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.981464 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.981631 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcd2\" (UniqueName: \"kubernetes.io/projected/f87788c4-1596-41e8-9033-674336188dc7-kube-api-access-fgcd2\") pod 
\"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.981906 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:01 crc kubenswrapper[4897]: I0214 19:08:01.981966 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: E0214 19:08:02.011011 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cbc738b_7239_431b_9ea6_fde705a328a3.slice/crio-32a0140464de8175e39edcb4adeadcaaa91e8b017ada07dd9f0c29fa76641114\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cbc738b_7239_431b_9ea6_fde705a328a3.slice\": RecentStats: unable to find data in memory cache]" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.084916 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.084976 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fgcd2\" (UniqueName: \"kubernetes.io/projected/f87788c4-1596-41e8-9033-674336188dc7-kube-api-access-fgcd2\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.085123 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.085159 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.085375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.085422 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.086184 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.086231 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.086425 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.086729 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.087190 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.104647 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcd2\" (UniqueName: 
\"kubernetes.io/projected/f87788c4-1596-41e8-9033-674336188dc7-kube-api-access-fgcd2\") pod \"dnsmasq-dns-6b7bbf7cf9-c8m59\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.138954 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.515648 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.573561 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.586739 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.644303 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.646913 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.649527 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.649752 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.649855 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.673936 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.723288 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.724366 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.724764 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 
19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.725180 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrmc\" (UniqueName: \"kubernetes.io/projected/ea23c74e-626a-4a73-8056-0b261563e5da-kube-api-access-6lrmc\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.725267 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.827486 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.828241 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lrmc\" (UniqueName: \"kubernetes.io/projected/ea23c74e-626a-4a73-8056-0b261563e5da-kube-api-access-6lrmc\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.828291 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 
19:08:02.828381 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.828497 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.836710 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.837974 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"] Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.839288 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.839623 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.861881 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea23c74e-626a-4a73-8056-0b261563e5da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.864833 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lrmc\" (UniqueName: \"kubernetes.io/projected/ea23c74e-626a-4a73-8056-0b261563e5da-kube-api-access-6lrmc\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea23c74e-626a-4a73-8056-0b261563e5da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:02 crc kubenswrapper[4897]: I0214 19:08:02.988216 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:03 crc kubenswrapper[4897]: I0214 19:08:03.497708 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 19:08:03 crc kubenswrapper[4897]: I0214 19:08:03.539525 4897 generic.go:334] "Generic (PLEG): container finished" podID="f87788c4-1596-41e8-9033-674336188dc7" containerID="fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2" exitCode=0 Feb 14 19:08:03 crc kubenswrapper[4897]: I0214 19:08:03.539603 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" event={"ID":"f87788c4-1596-41e8-9033-674336188dc7","Type":"ContainerDied","Data":"fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2"} Feb 14 19:08:03 crc kubenswrapper[4897]: I0214 19:08:03.539629 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" 
event={"ID":"f87788c4-1596-41e8-9033-674336188dc7","Type":"ContainerStarted","Data":"5ab40f2fb917b93dd5271a05fd8e568390e981dca20c0b7bd1576d8f818e5033"} Feb 14 19:08:03 crc kubenswrapper[4897]: I0214 19:08:03.544443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea23c74e-626a-4a73-8056-0b261563e5da","Type":"ContainerStarted","Data":"3fa521422f07d382ecfa075dd0228c2ee787b4a90a0368995a077887d96099df"} Feb 14 19:08:03 crc kubenswrapper[4897]: I0214 19:08:03.812149 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cbc738b-7239-431b-9ea6-fde705a328a3" path="/var/lib/kubelet/pods/4cbc738b-7239-431b-9ea6-fde705a328a3/volumes" Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.543714 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.547478 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="proxy-httpd" containerID="cri-o://7f0c1aef7c812ce1bfcac35588fc1e3a3f7454952bef0cedaf236abfc62ec363" gracePeriod=30 Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.547490 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="sg-core" containerID="cri-o://cd08ce55d33a0e32a87e0964079a005504da1287de30eaea2b90fab4b03b0000" gracePeriod=30 Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.547509 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-notification-agent" containerID="cri-o://ef13eb33e37343eb8b626255d7b1d9b5020c76826c4bbbc52fcd20a9cde2fd96" gracePeriod=30 Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.548063 4897 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-central-agent" containerID="cri-o://98adb8ae7c7ef5cb4526aa6393959e73e6573b02b2592fa72f49a838c32054f8" gracePeriod=30 Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.562761 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea23c74e-626a-4a73-8056-0b261563e5da","Type":"ContainerStarted","Data":"7ff80f58b7ccb2582d63244f0887301a20d2e94b9c1e99dbaa1b58ab25ae99f1"} Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.565086 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" event={"ID":"f87788c4-1596-41e8-9033-674336188dc7","Type":"ContainerStarted","Data":"c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff"} Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.565372 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.578209 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.601601 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.601577561 podStartE2EDuration="2.601577561s" podCreationTimestamp="2026-02-14 19:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:04.592098514 +0000 UTC m=+1537.568507007" watchObservedRunningTime="2026-02-14 19:08:04.601577561 +0000 UTC m=+1537.577986054" Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 
19:08:04.613561 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" podStartSLOduration=3.613544977 podStartE2EDuration="3.613544977s" podCreationTimestamp="2026-02-14 19:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:04.613290389 +0000 UTC m=+1537.589698872" watchObservedRunningTime="2026-02-14 19:08:04.613544977 +0000 UTC m=+1537.589953460" Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.898394 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.899020 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-log" containerID="cri-o://b9453114091ce0721c5829c8a28fcf2fdab580fada4c6f8accf9ca2b6c27bc6b" gracePeriod=30 Feb 14 19:08:04 crc kubenswrapper[4897]: I0214 19:08:04.899155 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-api" containerID="cri-o://e18080bfcdd4ee93a0bbe26ab97104aadfbb4e4778aaa6c8faafabc246290a60" gracePeriod=30 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.593262 4897 generic.go:334] "Generic (PLEG): container finished" podID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerID="4a771529b77da3107bf7598218eb12e5e79ea442afbb4ed2ea26bdae62f474b6" exitCode=137 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.593630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161","Type":"ContainerDied","Data":"4a771529b77da3107bf7598218eb12e5e79ea442afbb4ed2ea26bdae62f474b6"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.593658 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161","Type":"ContainerDied","Data":"6454ad43eee33c91bf2f97806311e8702e91cdb2288d66fee43d0b3023cbc1a7"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.593675 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6454ad43eee33c91bf2f97806311e8702e91cdb2288d66fee43d0b3023cbc1a7" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598506 4897 generic.go:334] "Generic (PLEG): container finished" podID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerID="7f0c1aef7c812ce1bfcac35588fc1e3a3f7454952bef0cedaf236abfc62ec363" exitCode=0 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598536 4897 generic.go:334] "Generic (PLEG): container finished" podID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerID="cd08ce55d33a0e32a87e0964079a005504da1287de30eaea2b90fab4b03b0000" exitCode=2 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598544 4897 generic.go:334] "Generic (PLEG): container finished" podID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerID="ef13eb33e37343eb8b626255d7b1d9b5020c76826c4bbbc52fcd20a9cde2fd96" exitCode=0 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598551 4897 generic.go:334] "Generic (PLEG): container finished" podID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerID="98adb8ae7c7ef5cb4526aa6393959e73e6573b02b2592fa72f49a838c32054f8" exitCode=0 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598589 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerDied","Data":"7f0c1aef7c812ce1bfcac35588fc1e3a3f7454952bef0cedaf236abfc62ec363"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598613 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerDied","Data":"cd08ce55d33a0e32a87e0964079a005504da1287de30eaea2b90fab4b03b0000"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598623 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerDied","Data":"ef13eb33e37343eb8b626255d7b1d9b5020c76826c4bbbc52fcd20a9cde2fd96"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.598633 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerDied","Data":"98adb8ae7c7ef5cb4526aa6393959e73e6573b02b2592fa72f49a838c32054f8"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.601542 4897 generic.go:334] "Generic (PLEG): container finished" podID="7fb07f36-326e-4d0b-979e-26075640b85a" containerID="b9453114091ce0721c5829c8a28fcf2fdab580fada4c6f8accf9ca2b6c27bc6b" exitCode=143 Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.601790 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb07f36-326e-4d0b-979e-26075640b85a","Type":"ContainerDied","Data":"b9453114091ce0721c5829c8a28fcf2fdab580fada4c6f8accf9ca2b6c27bc6b"} Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.679765 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.760561 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-config-data\") pod \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.762843 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-logs\") pod \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.762889 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-combined-ca-bundle\") pod \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.762930 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-nova-metadata-tls-certs\") pod \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.762966 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fmn\" (UniqueName: \"kubernetes.io/projected/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-kube-api-access-h9fmn\") pod \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\" (UID: \"5d30a9ab-233a-4fb5-b8f1-1abe51dc2161\") " Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.764508 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-logs" (OuterVolumeSpecName: "logs") pod "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" (UID: "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.781340 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-kube-api-access-h9fmn" (OuterVolumeSpecName: "kube-api-access-h9fmn") pod "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" (UID: "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161"). InnerVolumeSpecName "kube-api-access-h9fmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.805270 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-config-data" (OuterVolumeSpecName: "config-data") pod "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" (UID: "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.818632 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" (UID: "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.837724 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" (UID: "5d30a9ab-233a-4fb5-b8f1-1abe51dc2161"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.866174 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.866202 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.866214 4897 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.866224 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fmn\" (UniqueName: \"kubernetes.io/projected/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-kube-api-access-h9fmn\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.866232 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.876760 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bh69p"] Feb 14 19:08:05 crc kubenswrapper[4897]: E0214 19:08:05.877330 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-metadata" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.877347 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" 
containerName="nova-metadata-metadata" Feb 14 19:08:05 crc kubenswrapper[4897]: E0214 19:08:05.877389 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-log" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.877396 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-log" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.878496 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-metadata" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.878520 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" containerName="nova-metadata-log" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.881260 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh69p"] Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.881357 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.969337 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-catalog-content\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.969489 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-utilities\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:05 crc kubenswrapper[4897]: I0214 19:08:05.969535 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z62dq\" (UniqueName: \"kubernetes.io/projected/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-kube-api-access-z62dq\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.071364 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-catalog-content\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.071518 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-utilities\") pod 
\"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.071559 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z62dq\" (UniqueName: \"kubernetes.io/projected/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-kube-api-access-z62dq\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.071902 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-catalog-content\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.071981 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-utilities\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.095963 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z62dq\" (UniqueName: \"kubernetes.io/projected/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-kube-api-access-z62dq\") pod \"certified-operators-bh69p\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.203669 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.210227 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.278716 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-log-httpd\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.278785 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-combined-ca-bundle\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.278877 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-run-httpd\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.278916 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-scripts\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.278947 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zmts\" (UniqueName: \"kubernetes.io/projected/80f924e2-0f79-47b0-ac1b-c909d06c87d1-kube-api-access-5zmts\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: 
\"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.278980 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-config-data\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.279015 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-sg-core-conf-yaml\") pod \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\" (UID: \"80f924e2-0f79-47b0-ac1b-c909d06c87d1\") " Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.279679 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.285148 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f924e2-0f79-47b0-ac1b-c909d06c87d1-kube-api-access-5zmts" (OuterVolumeSpecName: "kube-api-access-5zmts") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). InnerVolumeSpecName "kube-api-access-5zmts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.291985 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.299494 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-scripts" (OuterVolumeSpecName: "scripts") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.339646 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.382221 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.382251 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80f924e2-0f79-47b0-ac1b-c909d06c87d1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.382262 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.382273 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zmts\" (UniqueName: \"kubernetes.io/projected/80f924e2-0f79-47b0-ac1b-c909d06c87d1-kube-api-access-5zmts\") on node \"crc\" DevicePath \"\"" 
Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.382283 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.447288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.484471 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.512208 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-config-data" (OuterVolumeSpecName: "config-data") pod "80f924e2-0f79-47b0-ac1b-c909d06c87d1" (UID: "80f924e2-0f79-47b0-ac1b-c909d06c87d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.590497 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f924e2-0f79-47b0-ac1b-c909d06c87d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.615928 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.619200 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.619281 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80f924e2-0f79-47b0-ac1b-c909d06c87d1","Type":"ContainerDied","Data":"3a8e0e273a28c32032f8f92270d07b95659cae62930909fd3ccaeb8dd3f4eef6"} Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.619354 4897 scope.go:117] "RemoveContainer" containerID="7f0c1aef7c812ce1bfcac35588fc1e3a3f7454952bef0cedaf236abfc62ec363" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.650398 4897 scope.go:117] "RemoveContainer" containerID="cd08ce55d33a0e32a87e0964079a005504da1287de30eaea2b90fab4b03b0000" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.671414 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.686848 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.693911 4897 scope.go:117] "RemoveContainer" containerID="ef13eb33e37343eb8b626255d7b1d9b5020c76826c4bbbc52fcd20a9cde2fd96" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.709157 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.740781 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: E0214 19:08:06.741298 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="proxy-httpd" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741313 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="proxy-httpd" Feb 14 19:08:06 crc kubenswrapper[4897]: E0214 19:08:06.741326 4897 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-notification-agent" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741332 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-notification-agent" Feb 14 19:08:06 crc kubenswrapper[4897]: E0214 19:08:06.741367 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="sg-core" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741374 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="sg-core" Feb 14 19:08:06 crc kubenswrapper[4897]: E0214 19:08:06.741394 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-central-agent" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741400 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-central-agent" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741615 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-central-agent" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741640 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="ceilometer-notification-agent" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741652 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="proxy-httpd" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.741662 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" containerName="sg-core" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.742864 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.745427 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.745662 4897 scope.go:117] "RemoveContainer" containerID="98adb8ae7c7ef5cb4526aa6393959e73e6573b02b2592fa72f49a838c32054f8" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.745718 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.752075 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.774272 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.788245 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.791551 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.794380 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.794701 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.800219 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.815582 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh69p"] Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.900727 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-run-httpd\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.900775 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9lp\" (UniqueName: \"kubernetes.io/projected/482f17ca-b3a8-485b-bacc-58b97547974a-kube-api-access-ds9lp\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901117 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-scripts\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901200 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/482f17ca-b3a8-485b-bacc-58b97547974a-logs\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901297 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg46c\" (UniqueName: \"kubernetes.io/projected/b5333e82-9d18-40af-bd85-733c1db3df87-kube-api-access-xg46c\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901328 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-config-data\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901386 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-log-httpd\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901443 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-config-data\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901479 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901538 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901563 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.901632 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:06 crc kubenswrapper[4897]: I0214 19:08:06.980684 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:06 crc kubenswrapper[4897]: E0214 19:08:06.981641 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-xg46c log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="b5333e82-9d18-40af-bd85-733c1db3df87" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.003729 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-run-httpd\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.003780 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds9lp\" (UniqueName: \"kubernetes.io/projected/482f17ca-b3a8-485b-bacc-58b97547974a-kube-api-access-ds9lp\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.003860 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-scripts\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.003902 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482f17ca-b3a8-485b-bacc-58b97547974a-logs\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.003931 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg46c\" (UniqueName: \"kubernetes.io/projected/b5333e82-9d18-40af-bd85-733c1db3df87-kube-api-access-xg46c\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.003950 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-config-data\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc 
kubenswrapper[4897]: I0214 19:08:07.003984 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-log-httpd\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.004022 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-config-data\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.004059 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.004092 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.004113 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.004161 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.005801 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-log-httpd\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.005982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-run-httpd\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.006288 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482f17ca-b3a8-485b-bacc-58b97547974a-logs\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.009776 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.010058 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.010082 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-config-data\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.010658 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.014740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.015986 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-config-data\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.019979 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-scripts\") pod \"ceilometer-0\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.024978 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg46c\" (UniqueName: \"kubernetes.io/projected/b5333e82-9d18-40af-bd85-733c1db3df87-kube-api-access-xg46c\") pod \"ceilometer-0\" (UID: 
\"b5333e82-9d18-40af-bd85-733c1db3df87\") " pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.030889 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds9lp\" (UniqueName: \"kubernetes.io/projected/482f17ca-b3a8-485b-bacc-58b97547974a-kube-api-access-ds9lp\") pod \"nova-metadata-0\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.082386 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.587493 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:07 crc kubenswrapper[4897]: W0214 19:08:07.588557 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod482f17ca_b3a8_485b_bacc_58b97547974a.slice/crio-bc7e119c15b1d60a0386c6deefbecd91ee1c9d3d5029c5fd86b4059c3b577261 WatchSource:0}: Error finding container bc7e119c15b1d60a0386c6deefbecd91ee1c9d3d5029c5fd86b4059c3b577261: Status 404 returned error can't find the container with id bc7e119c15b1d60a0386c6deefbecd91ee1c9d3d5029c5fd86b4059c3b577261 Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.628233 4897 generic.go:334] "Generic (PLEG): container finished" podID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerID="17fe01670a807dcad902d8f36f4e5252a8f4156362bef44c4a1b0d03ed3d50f4" exitCode=0 Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.628322 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerDied","Data":"17fe01670a807dcad902d8f36f4e5252a8f4156362bef44c4a1b0d03ed3d50f4"} Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.628350 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerStarted","Data":"dabfcef4f7bb4bc0ad1f2c1f71126ff805cfe1ac60b78883873c6c90a266267d"} Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.630077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"482f17ca-b3a8-485b-bacc-58b97547974a","Type":"ContainerStarted","Data":"bc7e119c15b1d60a0386c6deefbecd91ee1c9d3d5029c5fd86b4059c3b577261"} Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.632074 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.808437 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.809798 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d30a9ab-233a-4fb5-b8f1-1abe51dc2161" path="/var/lib/kubelet/pods/5d30a9ab-233a-4fb5-b8f1-1abe51dc2161/volumes" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.811298 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f924e2-0f79-47b0-ac1b-c909d06c87d1" path="/var/lib/kubelet/pods/80f924e2-0f79-47b0-ac1b-c909d06c87d1/volumes" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.944336 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-run-httpd\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.944728 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-sg-core-conf-yaml\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" 
(UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.944782 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-scripts\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.945225 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-combined-ca-bundle\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.945278 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.945394 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-config-data\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.945438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg46c\" (UniqueName: \"kubernetes.io/projected/b5333e82-9d18-40af-bd85-733c1db3df87-kube-api-access-xg46c\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.945588 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-log-httpd\") pod \"b5333e82-9d18-40af-bd85-733c1db3df87\" (UID: \"b5333e82-9d18-40af-bd85-733c1db3df87\") " Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.946592 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.947130 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.949308 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.951836 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-scripts" (OuterVolumeSpecName: "scripts") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.952227 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5333e82-9d18-40af-bd85-733c1db3df87-kube-api-access-xg46c" (OuterVolumeSpecName: "kube-api-access-xg46c") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "kube-api-access-xg46c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.952475 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.952944 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-config-data" (OuterVolumeSpecName: "config-data") pod "b5333e82-9d18-40af-bd85-733c1db3df87" (UID: "b5333e82-9d18-40af-bd85-733c1db3df87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:07 crc kubenswrapper[4897]: I0214 19:08:07.989339 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.049933 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg46c\" (UniqueName: \"kubernetes.io/projected/b5333e82-9d18-40af-bd85-733c1db3df87-kube-api-access-xg46c\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.050201 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.050288 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5333e82-9d18-40af-bd85-733c1db3df87-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.050385 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.050458 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc 
kubenswrapper[4897]: I0214 19:08:08.050510 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5333e82-9d18-40af-bd85-733c1db3df87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.646888 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"482f17ca-b3a8-485b-bacc-58b97547974a","Type":"ContainerStarted","Data":"c0c5d828d890f790335115cf7cc36aa5e10fa411011ec6a3ef4df81041e287ed"} Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.647296 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"482f17ca-b3a8-485b-bacc-58b97547974a","Type":"ContainerStarted","Data":"60ed2b81ddf2f413c01e12978ae43b9013b26e3752dd24a40797129171d89369"} Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.662439 4897 generic.go:334] "Generic (PLEG): container finished" podID="7fb07f36-326e-4d0b-979e-26075640b85a" containerID="e18080bfcdd4ee93a0bbe26ab97104aadfbb4e4778aaa6c8faafabc246290a60" exitCode=0 Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.662480 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb07f36-326e-4d0b-979e-26075640b85a","Type":"ContainerDied","Data":"e18080bfcdd4ee93a0bbe26ab97104aadfbb4e4778aaa6c8faafabc246290a60"} Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.662550 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb07f36-326e-4d0b-979e-26075640b85a","Type":"ContainerDied","Data":"097badb205d0d78180280ed54c477c05bad1395d6d48d9eb07f026442ae7baa0"} Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.662563 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097badb205d0d78180280ed54c477c05bad1395d6d48d9eb07f026442ae7baa0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.665448 4897 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.665495 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerStarted","Data":"8615cdfdb92d9ec4c3cf137cd8c3e47a83d5ddef0e3adb10a4d14e99628a9e32"} Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.680490 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.680466686 podStartE2EDuration="2.680466686s" podCreationTimestamp="2026-02-14 19:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:08.674077265 +0000 UTC m=+1541.650485768" watchObservedRunningTime="2026-02-14 19:08:08.680466686 +0000 UTC m=+1541.656875159" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.725050 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.774437 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.790812 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.802555 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:08 crc kubenswrapper[4897]: E0214 19:08:08.803108 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-api" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.803120 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-api" Feb 14 19:08:08 crc kubenswrapper[4897]: E0214 19:08:08.803133 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-log" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.803140 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-log" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.803358 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-api" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.803369 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" containerName="nova-api-log" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.807711 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.811059 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.811268 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.829749 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.875021 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb07f36-326e-4d0b-979e-26075640b85a-logs\") pod \"7fb07f36-326e-4d0b-979e-26075640b85a\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.875533 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-config-data\") pod \"7fb07f36-326e-4d0b-979e-26075640b85a\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.875693 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-combined-ca-bundle\") pod \"7fb07f36-326e-4d0b-979e-26075640b85a\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.875744 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwjz6\" (UniqueName: \"kubernetes.io/projected/7fb07f36-326e-4d0b-979e-26075640b85a-kube-api-access-qwjz6\") pod \"7fb07f36-326e-4d0b-979e-26075640b85a\" (UID: \"7fb07f36-326e-4d0b-979e-26075640b85a\") " Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 
19:08:08.876765 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb07f36-326e-4d0b-979e-26075640b85a-logs" (OuterVolumeSpecName: "logs") pod "7fb07f36-326e-4d0b-979e-26075640b85a" (UID: "7fb07f36-326e-4d0b-979e-26075640b85a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.884263 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb07f36-326e-4d0b-979e-26075640b85a-kube-api-access-qwjz6" (OuterVolumeSpecName: "kube-api-access-qwjz6") pod "7fb07f36-326e-4d0b-979e-26075640b85a" (UID: "7fb07f36-326e-4d0b-979e-26075640b85a"). InnerVolumeSpecName "kube-api-access-qwjz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.908460 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fb07f36-326e-4d0b-979e-26075640b85a" (UID: "7fb07f36-326e-4d0b-979e-26075640b85a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.912338 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-config-data" (OuterVolumeSpecName: "config-data") pod "7fb07f36-326e-4d0b-979e-26075640b85a" (UID: "7fb07f36-326e-4d0b-979e-26075640b85a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979120 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979294 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-scripts\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979338 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-log-httpd\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979423 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vvb\" (UniqueName: \"kubernetes.io/projected/65ae60bb-0390-4729-8e95-a59633606a95-kube-api-access-k9vvb\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979472 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-run-httpd\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979553 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979601 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-config-data\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979671 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb07f36-326e-4d0b-979e-26075640b85a-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979687 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979697 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb07f36-326e-4d0b-979e-26075640b85a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:08 crc kubenswrapper[4897]: I0214 19:08:08.979706 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwjz6\" (UniqueName: \"kubernetes.io/projected/7fb07f36-326e-4d0b-979e-26075640b85a-kube-api-access-qwjz6\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082122 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-scripts\") pod 
\"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082181 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-log-httpd\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082249 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9vvb\" (UniqueName: \"kubernetes.io/projected/65ae60bb-0390-4729-8e95-a59633606a95-kube-api-access-k9vvb\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082279 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-run-httpd\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082337 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-config-data\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082419 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.082725 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-log-httpd\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.083009 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-run-httpd\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.085427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.085461 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-scripts\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.086390 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-config-data\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.086835 
4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.099706 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9vvb\" (UniqueName: \"kubernetes.io/projected/65ae60bb-0390-4729-8e95-a59633606a95-kube-api-access-k9vvb\") pod \"ceilometer-0\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.142633 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:09 crc kubenswrapper[4897]: W0214 19:08:09.620808 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65ae60bb_0390_4729_8e95_a59633606a95.slice/crio-ead88010b77c6475e09b191cd04d5446515d3b53fdd5bca36b4e9c4fd636503d WatchSource:0}: Error finding container ead88010b77c6475e09b191cd04d5446515d3b53fdd5bca36b4e9c4fd636503d: Status 404 returned error can't find the container with id ead88010b77c6475e09b191cd04d5446515d3b53fdd5bca36b4e9c4fd636503d Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.624537 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.677831 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerStarted","Data":"ead88010b77c6475e09b191cd04d5446515d3b53fdd5bca36b4e9c4fd636503d"} Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.680337 4897 generic.go:334] "Generic (PLEG): container finished" podID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" 
containerID="8615cdfdb92d9ec4c3cf137cd8c3e47a83d5ddef0e3adb10a4d14e99628a9e32" exitCode=0 Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.680433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerDied","Data":"8615cdfdb92d9ec4c3cf137cd8c3e47a83d5ddef0e3adb10a4d14e99628a9e32"} Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.680480 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.725661 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.736162 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.787364 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.789266 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.793088 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.793245 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.793347 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.830587 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb07f36-326e-4d0b-979e-26075640b85a" path="/var/lib/kubelet/pods/7fb07f36-326e-4d0b-979e-26075640b85a/volumes" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.831377 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5333e82-9d18-40af-bd85-733c1db3df87" path="/var/lib/kubelet/pods/b5333e82-9d18-40af-bd85-733c1db3df87/volumes" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.831793 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.911769 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7gn\" (UniqueName: \"kubernetes.io/projected/9b9564cf-9082-4da6-8197-229a5a16f424-kube-api-access-qk7gn\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.912847 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-config-data\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 
19:08:09.913206 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.913408 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9564cf-9082-4da6-8197-229a5a16f424-logs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.917534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:09 crc kubenswrapper[4897]: I0214 19:08:09.918241 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.021362 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.021510 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.021618 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7gn\" (UniqueName: \"kubernetes.io/projected/9b9564cf-9082-4da6-8197-229a5a16f424-kube-api-access-qk7gn\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.021696 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-config-data\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.021844 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.022093 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9564cf-9082-4da6-8197-229a5a16f424-logs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.023699 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9564cf-9082-4da6-8197-229a5a16f424-logs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.030979 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.031235 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.031402 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.044319 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7gn\" (UniqueName: \"kubernetes.io/projected/9b9564cf-9082-4da6-8197-229a5a16f424-kube-api-access-qk7gn\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.049081 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-config-data\") pod \"nova-api-0\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.111141 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.695909 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerStarted","Data":"35004ae46b90519f0a307a00890566c224d828b57484ce86c8db5cce73276be7"} Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.699346 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerStarted","Data":"25ac1c266c308368ef0f3ad2d172626a8f334786428c4b10284869b9b8bbf151"} Feb 14 19:08:10 crc kubenswrapper[4897]: W0214 19:08:10.723271 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9564cf_9082_4da6_8197_229a5a16f424.slice/crio-c99ec54e762c23685c9aa974cb34a93f005e9a9336d1bce25ee77e9a467c01a8 WatchSource:0}: Error finding container c99ec54e762c23685c9aa974cb34a93f005e9a9336d1bce25ee77e9a467c01a8: Status 404 returned error can't find the container with id c99ec54e762c23685c9aa974cb34a93f005e9a9336d1bce25ee77e9a467c01a8 Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.730642 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:10 crc kubenswrapper[4897]: I0214 19:08:10.742425 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bh69p" podStartSLOduration=3.183654134 podStartE2EDuration="5.742403683s" podCreationTimestamp="2026-02-14 19:08:05 +0000 UTC" firstStartedPulling="2026-02-14 19:08:07.633356715 +0000 UTC m=+1540.609765198" lastFinishedPulling="2026-02-14 19:08:10.192106264 +0000 UTC m=+1543.168514747" observedRunningTime="2026-02-14 19:08:10.721389274 +0000 UTC m=+1543.697797767" watchObservedRunningTime="2026-02-14 19:08:10.742403683 +0000 UTC m=+1543.718812166" 
Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.392940 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tgsgv"] Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.397894 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.414393 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgsgv"] Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.559566 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-catalog-content\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.559617 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msft\" (UniqueName: \"kubernetes.io/projected/ee068b46-5bbc-4442-b2be-6b0f086d1edb-kube-api-access-8msft\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.559644 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-utilities\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.662945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-catalog-content\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.662998 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msft\" (UniqueName: \"kubernetes.io/projected/ee068b46-5bbc-4442-b2be-6b0f086d1edb-kube-api-access-8msft\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.663051 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-utilities\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.663493 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-catalog-content\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.664251 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-utilities\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.695971 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msft\" (UniqueName: 
\"kubernetes.io/projected/ee068b46-5bbc-4442-b2be-6b0f086d1edb-kube-api-access-8msft\") pod \"community-operators-tgsgv\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.726658 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.733369 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b9564cf-9082-4da6-8197-229a5a16f424","Type":"ContainerStarted","Data":"b9adf73bc90557a49b5277f4e1c17e9d8547ab4da364480f92f937bdac9ed118"} Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.733416 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b9564cf-9082-4da6-8197-229a5a16f424","Type":"ContainerStarted","Data":"5ceee8207abb0f8e607590cb3be3de1accb9c50f6b0ec1319bfc1c74c7561608"} Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.733429 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b9564cf-9082-4da6-8197-229a5a16f424","Type":"ContainerStarted","Data":"c99ec54e762c23685c9aa974cb34a93f005e9a9336d1bce25ee77e9a467c01a8"} Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.736040 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerStarted","Data":"9431cb9bbf0fb3594f8c7eb95f93b749700eb842dfc5699bd6c14340599534a8"} Feb 14 19:08:11 crc kubenswrapper[4897]: I0214 19:08:11.767993 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.767974378 podStartE2EDuration="2.767974378s" podCreationTimestamp="2026-02-14 19:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 19:08:11.755565319 +0000 UTC m=+1544.731973812" watchObservedRunningTime="2026-02-14 19:08:11.767974378 +0000 UTC m=+1544.744382861" Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.082959 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.083207 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.141221 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.215271 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pwx2b"] Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.215544 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="dnsmasq-dns" containerID="cri-o://17dd65258588af3928d6ab2f5068f102834180603077de93e81b78d33318c68a" gracePeriod=10 Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.304280 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgsgv"] Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.320439 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.251:5353: connect: connection refused" Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.754405 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerStarted","Data":"6486ffdafc27d5a8464330b77e4278af807c73c5fb798426c68013d04ef615ba"} Feb 14 
19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.762013 4897 generic.go:334] "Generic (PLEG): container finished" podID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerID="17dd65258588af3928d6ab2f5068f102834180603077de93e81b78d33318c68a" exitCode=0 Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.762099 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" event={"ID":"7dc95e64-31a9-4a6a-87fe-bfe2d765966f","Type":"ContainerDied","Data":"17dd65258588af3928d6ab2f5068f102834180603077de93e81b78d33318c68a"} Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.763769 4897 generic.go:334] "Generic (PLEG): container finished" podID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerID="0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834" exitCode=0 Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.763833 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerDied","Data":"0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834"} Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.763913 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerStarted","Data":"a45d57c48601dab592e67e536c05b3bf93c1270e756f21d581c1df86f6bdc35e"} Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.969956 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:08:12 crc kubenswrapper[4897]: I0214 19:08:12.990867 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.047902 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.107902 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-svc\") pod \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.107978 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-nb\") pod \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.108013 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-swift-storage-0\") pod \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.108135 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-config\") pod \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.108249 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-sb\") pod \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.108277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dk6g\" (UniqueName: \"kubernetes.io/projected/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-kube-api-access-4dk6g\") pod \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\" (UID: \"7dc95e64-31a9-4a6a-87fe-bfe2d765966f\") " Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.123734 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-kube-api-access-4dk6g" (OuterVolumeSpecName: "kube-api-access-4dk6g") pod "7dc95e64-31a9-4a6a-87fe-bfe2d765966f" (UID: "7dc95e64-31a9-4a6a-87fe-bfe2d765966f"). InnerVolumeSpecName "kube-api-access-4dk6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.203012 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7dc95e64-31a9-4a6a-87fe-bfe2d765966f" (UID: "7dc95e64-31a9-4a6a-87fe-bfe2d765966f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.209209 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-config" (OuterVolumeSpecName: "config") pod "7dc95e64-31a9-4a6a-87fe-bfe2d765966f" (UID: "7dc95e64-31a9-4a6a-87fe-bfe2d765966f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.212874 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.212905 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.212922 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dk6g\" (UniqueName: \"kubernetes.io/projected/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-kube-api-access-4dk6g\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.218566 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7dc95e64-31a9-4a6a-87fe-bfe2d765966f" (UID: "7dc95e64-31a9-4a6a-87fe-bfe2d765966f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.220637 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7dc95e64-31a9-4a6a-87fe-bfe2d765966f" (UID: "7dc95e64-31a9-4a6a-87fe-bfe2d765966f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.225529 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7dc95e64-31a9-4a6a-87fe-bfe2d765966f" (UID: "7dc95e64-31a9-4a6a-87fe-bfe2d765966f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.315651 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.315683 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.315696 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc95e64-31a9-4a6a-87fe-bfe2d765966f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.776105 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerStarted","Data":"1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2"} Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.779473 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerStarted","Data":"f5e64bfb563f23c6ec0fe6f5e0a4a36bee8bfe647cc7003eb26406bc73ce47a6"} Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.779613 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.782978 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" event={"ID":"7dc95e64-31a9-4a6a-87fe-bfe2d765966f","Type":"ContainerDied","Data":"5201133335f271edf49e7996aca2bcdcd236b53880803e4ce1b1f30519d865f4"} Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.783014 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pwx2b" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.783023 4897 scope.go:117] "RemoveContainer" containerID="17dd65258588af3928d6ab2f5068f102834180603077de93e81b78d33318c68a" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.826601 4897 scope.go:117] "RemoveContainer" containerID="d132f21138a128d59defb0fe7884725ac0bfcd7d800a88bde009e9b3e95b146e" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.888610 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.234169374 podStartE2EDuration="5.888590548s" podCreationTimestamp="2026-02-14 19:08:08 +0000 UTC" firstStartedPulling="2026-02-14 19:08:09.623071186 +0000 UTC m=+1542.599479669" lastFinishedPulling="2026-02-14 19:08:13.27749236 +0000 UTC m=+1546.253900843" observedRunningTime="2026-02-14 19:08:13.872916646 +0000 UTC m=+1546.849325149" watchObservedRunningTime="2026-02-14 19:08:13.888590548 +0000 UTC m=+1546.864999021" Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.941103 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pwx2b"] Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.984611 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pwx2b"] Feb 14 19:08:13 crc kubenswrapper[4897]: I0214 19:08:13.984736 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.294716 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hz8g2"] Feb 14 19:08:14 crc kubenswrapper[4897]: E0214 19:08:14.295348 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="init" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.295365 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="init" Feb 14 19:08:14 crc kubenswrapper[4897]: E0214 19:08:14.295387 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="dnsmasq-dns" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.295396 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="dnsmasq-dns" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.295707 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" containerName="dnsmasq-dns" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.296743 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.298314 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.304697 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.334290 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hz8g2"] Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.389628 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrqc\" (UniqueName: \"kubernetes.io/projected/22457187-fe82-4c9a-b565-95c7e561611f-kube-api-access-kxrqc\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.390018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.390069 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-scripts\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.390102 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-config-data\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.491653 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.491739 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-scripts\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.491784 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-config-data\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.491988 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrqc\" (UniqueName: \"kubernetes.io/projected/22457187-fe82-4c9a-b565-95c7e561611f-kube-api-access-kxrqc\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.502649 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.502854 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-scripts\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.506855 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-config-data\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.530684 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrqc\" (UniqueName: \"kubernetes.io/projected/22457187-fe82-4c9a-b565-95c7e561611f-kube-api-access-kxrqc\") pod \"nova-cell1-cell-mapping-hz8g2\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:14 crc kubenswrapper[4897]: I0214 19:08:14.621995 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.188319 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hz8g2"] Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.805931 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc95e64-31a9-4a6a-87fe-bfe2d765966f" path="/var/lib/kubelet/pods/7dc95e64-31a9-4a6a-87fe-bfe2d765966f/volumes" Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.817505 4897 generic.go:334] "Generic (PLEG): container finished" podID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerID="1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2" exitCode=0 Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.817562 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerDied","Data":"1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2"} Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.819853 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hz8g2" event={"ID":"22457187-fe82-4c9a-b565-95c7e561611f","Type":"ContainerStarted","Data":"fcee3462510703d0a5ac1111e66700e2e376aad9911ff3c41e21495c8b737986"} Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.819879 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hz8g2" event={"ID":"22457187-fe82-4c9a-b565-95c7e561611f","Type":"ContainerStarted","Data":"255f5aeac36f3e91c4244e38b6607aa6886f641b2c19ea09b81af3e5d91a769b"} Feb 14 19:08:15 crc kubenswrapper[4897]: I0214 19:08:15.871260 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hz8g2" podStartSLOduration=1.871233407 podStartE2EDuration="1.871233407s" podCreationTimestamp="2026-02-14 19:08:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:15.863489194 +0000 UTC m=+1548.839897677" watchObservedRunningTime="2026-02-14 19:08:15.871233407 +0000 UTC m=+1548.847641890" Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.204435 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.204778 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.287604 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.836823 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerStarted","Data":"e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de"} Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.851322 4897 generic.go:334] "Generic (PLEG): container finished" podID="02935790-1dbb-42a8-8f04-1314338f3425" containerID="f7cd8662cdc582c53a5a67669c742128da4283371a38819e41c7a28b0d5b8a56" exitCode=137 Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.851354 4897 generic.go:334] "Generic (PLEG): container finished" podID="02935790-1dbb-42a8-8f04-1314338f3425" containerID="1843670e358a6514a4926d4c57fbef874c0393403f8ca8040d5eeb0807c8d34f" exitCode=137 Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.852061 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerDied","Data":"f7cd8662cdc582c53a5a67669c742128da4283371a38819e41c7a28b0d5b8a56"} Feb 14 19:08:16 crc 
kubenswrapper[4897]: I0214 19:08:16.852100 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerDied","Data":"1843670e358a6514a4926d4c57fbef874c0393403f8ca8040d5eeb0807c8d34f"} Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.885674 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tgsgv" podStartSLOduration=2.125601853 podStartE2EDuration="5.885651622s" podCreationTimestamp="2026-02-14 19:08:11 +0000 UTC" firstStartedPulling="2026-02-14 19:08:12.767109913 +0000 UTC m=+1545.743518386" lastFinishedPulling="2026-02-14 19:08:16.527159672 +0000 UTC m=+1549.503568155" observedRunningTime="2026-02-14 19:08:16.863465756 +0000 UTC m=+1549.839874239" watchObservedRunningTime="2026-02-14 19:08:16.885651622 +0000 UTC m=+1549.862060105" Feb 14 19:08:16 crc kubenswrapper[4897]: I0214 19:08:16.940245 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.083061 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.083095 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.302647 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.479322 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-config-data\") pod \"02935790-1dbb-42a8-8f04-1314338f3425\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.481287 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-scripts\") pod \"02935790-1dbb-42a8-8f04-1314338f3425\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.481605 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-combined-ca-bundle\") pod \"02935790-1dbb-42a8-8f04-1314338f3425\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.481970 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88sln\" (UniqueName: \"kubernetes.io/projected/02935790-1dbb-42a8-8f04-1314338f3425-kube-api-access-88sln\") pod \"02935790-1dbb-42a8-8f04-1314338f3425\" (UID: \"02935790-1dbb-42a8-8f04-1314338f3425\") " Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.488156 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02935790-1dbb-42a8-8f04-1314338f3425-kube-api-access-88sln" (OuterVolumeSpecName: "kube-api-access-88sln") pod "02935790-1dbb-42a8-8f04-1314338f3425" (UID: "02935790-1dbb-42a8-8f04-1314338f3425"). InnerVolumeSpecName "kube-api-access-88sln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.488473 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-scripts" (OuterVolumeSpecName: "scripts") pod "02935790-1dbb-42a8-8f04-1314338f3425" (UID: "02935790-1dbb-42a8-8f04-1314338f3425"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.584650 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.584682 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88sln\" (UniqueName: \"kubernetes.io/projected/02935790-1dbb-42a8-8f04-1314338f3425-kube-api-access-88sln\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.651580 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02935790-1dbb-42a8-8f04-1314338f3425" (UID: "02935790-1dbb-42a8-8f04-1314338f3425"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.653116 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-config-data" (OuterVolumeSpecName: "config-data") pod "02935790-1dbb-42a8-8f04-1314338f3425" (UID: "02935790-1dbb-42a8-8f04-1314338f3425"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.686419 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.686454 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02935790-1dbb-42a8-8f04-1314338f3425-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.776685 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh69p"] Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.863513 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.863513 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"02935790-1dbb-42a8-8f04-1314338f3425","Type":"ContainerDied","Data":"e8bdf80060c70e21751d8dc942a31ee0062545c6a4edd9a36ba50f5a72fbe2f8"} Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.863597 4897 scope.go:117] "RemoveContainer" containerID="f7cd8662cdc582c53a5a67669c742128da4283371a38819e41c7a28b0d5b8a56" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.899015 4897 scope.go:117] "RemoveContainer" containerID="1843670e358a6514a4926d4c57fbef874c0393403f8ca8040d5eeb0807c8d34f" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.899177 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.912144 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.926111 4897 scope.go:117] "RemoveContainer" 
containerID="0f6bcc97305beb7768ff5b28ce7f58e796ad52e3e7f9815758c82419013fb212" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.945467 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 19:08:17 crc kubenswrapper[4897]: E0214 19:08:17.945969 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-notifier" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.945985 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-notifier" Feb 14 19:08:17 crc kubenswrapper[4897]: E0214 19:08:17.946005 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-api" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.946012 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-api" Feb 14 19:08:17 crc kubenswrapper[4897]: E0214 19:08:17.946053 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-evaluator" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.946059 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-evaluator" Feb 14 19:08:17 crc kubenswrapper[4897]: E0214 19:08:17.946081 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-listener" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.946086 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-listener" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.946320 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-listener" Feb 14 19:08:17 
crc kubenswrapper[4897]: I0214 19:08:17.946342 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-api" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.946358 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-notifier" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.946371 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="02935790-1dbb-42a8-8f04-1314338f3425" containerName="aodh-evaluator" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.948378 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.954469 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.954638 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.954941 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5zcr5" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.955158 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.955325 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.961093 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 19:08:17 crc kubenswrapper[4897]: I0214 19:08:17.988990 4897 scope.go:117] "RemoveContainer" containerID="418d2798011a99aa5b8b7f21d3b60db521e1db1f9058e2d39a112e37d838134f" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:17.998197 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-public-tls-certs\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:17.998274 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-scripts\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:17.998363 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnhp\" (UniqueName: \"kubernetes.io/projected/944b8f01-b27e-4d2a-b198-b44a9b10e47b-kube-api-access-ppnhp\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:17.998423 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:17.998501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-internal-tls-certs\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:17.998525 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-config-data\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.096176 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.096456 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.100332 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnhp\" (UniqueName: \"kubernetes.io/projected/944b8f01-b27e-4d2a-b198-b44a9b10e47b-kube-api-access-ppnhp\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.100393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.100459 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-internal-tls-certs\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc 
kubenswrapper[4897]: I0214 19:08:18.100483 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-config-data\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.100549 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-public-tls-certs\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.100584 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-scripts\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.105658 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-internal-tls-certs\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.106499 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.106848 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-public-tls-certs\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " 
pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.107944 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-config-data\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.108065 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-scripts\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.126505 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnhp\" (UniqueName: \"kubernetes.io/projected/944b8f01-b27e-4d2a-b198-b44a9b10e47b-kube-api-access-ppnhp\") pod \"aodh-0\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.274843 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.859934 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.910098 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerStarted","Data":"8c077af132f308fc9645f8f948119cfeb743e6123a9f67d199ceff7fb4a926da"} Feb 14 19:08:18 crc kubenswrapper[4897]: I0214 19:08:18.910163 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bh69p" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="registry-server" containerID="cri-o://25ac1c266c308368ef0f3ad2d172626a8f334786428c4b10284869b9b8bbf151" gracePeriod=2 Feb 14 19:08:19 crc kubenswrapper[4897]: I0214 19:08:19.868170 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02935790-1dbb-42a8-8f04-1314338f3425" path="/var/lib/kubelet/pods/02935790-1dbb-42a8-8f04-1314338f3425/volumes" Feb 14 19:08:19 crc kubenswrapper[4897]: I0214 19:08:19.999512 4897 generic.go:334] "Generic (PLEG): container finished" podID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerID="25ac1c266c308368ef0f3ad2d172626a8f334786428c4b10284869b9b8bbf151" exitCode=0 Feb 14 19:08:19 crc kubenswrapper[4897]: I0214 19:08:19.999589 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerDied","Data":"25ac1c266c308368ef0f3ad2d172626a8f334786428c4b10284869b9b8bbf151"} Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.016202 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerStarted","Data":"fbf363a8091a98962cc88d42321a52dcefe63a2ba2c0f4dead34de765a46d9b1"} Feb 14 19:08:20 crc 
kubenswrapper[4897]: I0214 19:08:20.114342 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.114739 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.173934 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.356736 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z62dq\" (UniqueName: \"kubernetes.io/projected/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-kube-api-access-z62dq\") pod \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.356852 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-catalog-content\") pod \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.357088 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-utilities\") pod \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\" (UID: \"9e7b8b61-a5fd-4bea-91f0-45342d6587f2\") " Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.357720 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-utilities" (OuterVolumeSpecName: "utilities") pod "9e7b8b61-a5fd-4bea-91f0-45342d6587f2" (UID: "9e7b8b61-a5fd-4bea-91f0-45342d6587f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.362912 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-kube-api-access-z62dq" (OuterVolumeSpecName: "kube-api-access-z62dq") pod "9e7b8b61-a5fd-4bea-91f0-45342d6587f2" (UID: "9e7b8b61-a5fd-4bea-91f0-45342d6587f2"). InnerVolumeSpecName "kube-api-access-z62dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.403661 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e7b8b61-a5fd-4bea-91f0-45342d6587f2" (UID: "9e7b8b61-a5fd-4bea-91f0-45342d6587f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.460511 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.460558 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:20 crc kubenswrapper[4897]: I0214 19:08:20.460571 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z62dq\" (UniqueName: \"kubernetes.io/projected/9e7b8b61-a5fd-4bea-91f0-45342d6587f2-kube-api-access-z62dq\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.029272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerStarted","Data":"fa44bbb6b1b409a42c81589c5d6d3fe0a9db210143a884471f840efec692a131"} Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.031388 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh69p" event={"ID":"9e7b8b61-a5fd-4bea-91f0-45342d6587f2","Type":"ContainerDied","Data":"dabfcef4f7bb4bc0ad1f2c1f71126ff805cfe1ac60b78883873c6c90a266267d"} Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.031600 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh69p" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.031645 4897 scope.go:117] "RemoveContainer" containerID="25ac1c266c308368ef0f3ad2d172626a8f334786428c4b10284869b9b8bbf151" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.061302 4897 scope.go:117] "RemoveContainer" containerID="8615cdfdb92d9ec4c3cf137cd8c3e47a83d5ddef0e3adb10a4d14e99628a9e32" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.120800 4897 scope.go:117] "RemoveContainer" containerID="17fe01670a807dcad902d8f36f4e5252a8f4156362bef44c4a1b0d03ed3d50f4" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.131185 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh69p"] Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.134264 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.134300 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.144576 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bh69p"] Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.727754 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.727807 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.784552 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:21 crc kubenswrapper[4897]: I0214 19:08:21.808143 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" path="/var/lib/kubelet/pods/9e7b8b61-a5fd-4bea-91f0-45342d6587f2/volumes" Feb 14 19:08:22 crc kubenswrapper[4897]: I0214 19:08:22.041633 4897 generic.go:334] "Generic (PLEG): container finished" podID="22457187-fe82-4c9a-b565-95c7e561611f" containerID="fcee3462510703d0a5ac1111e66700e2e376aad9911ff3c41e21495c8b737986" exitCode=0 Feb 14 19:08:22 crc kubenswrapper[4897]: I0214 19:08:22.041708 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hz8g2" event={"ID":"22457187-fe82-4c9a-b565-95c7e561611f","Type":"ContainerDied","Data":"fcee3462510703d0a5ac1111e66700e2e376aad9911ff3c41e21495c8b737986"} Feb 14 19:08:22 crc kubenswrapper[4897]: I0214 19:08:22.045436 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerStarted","Data":"c495243fd15aec00ff4c52117972de70493f702eca329f10008045249c618c50"} Feb 14 19:08:22 crc 
kubenswrapper[4897]: I0214 19:08:22.045485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerStarted","Data":"75bf987ce8bfc743c8e7002ca68f6bd80b0cd27a2bcb19f9d8e8481a23063b43"} Feb 14 19:08:22 crc kubenswrapper[4897]: I0214 19:08:22.096264 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.322835856 podStartE2EDuration="5.096245502s" podCreationTimestamp="2026-02-14 19:08:17 +0000 UTC" firstStartedPulling="2026-02-14 19:08:18.866355581 +0000 UTC m=+1551.842764064" lastFinishedPulling="2026-02-14 19:08:21.639765217 +0000 UTC m=+1554.616173710" observedRunningTime="2026-02-14 19:08:22.095635253 +0000 UTC m=+1555.072043726" watchObservedRunningTime="2026-02-14 19:08:22.096245502 +0000 UTC m=+1555.072653975" Feb 14 19:08:22 crc kubenswrapper[4897]: I0214 19:08:22.140923 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.464986 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.649781 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-scripts\") pod \"22457187-fe82-4c9a-b565-95c7e561611f\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.650107 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-combined-ca-bundle\") pod \"22457187-fe82-4c9a-b565-95c7e561611f\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.650195 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-config-data\") pod \"22457187-fe82-4c9a-b565-95c7e561611f\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.650232 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxrqc\" (UniqueName: \"kubernetes.io/projected/22457187-fe82-4c9a-b565-95c7e561611f-kube-api-access-kxrqc\") pod \"22457187-fe82-4c9a-b565-95c7e561611f\" (UID: \"22457187-fe82-4c9a-b565-95c7e561611f\") " Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.656438 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-scripts" (OuterVolumeSpecName: "scripts") pod "22457187-fe82-4c9a-b565-95c7e561611f" (UID: "22457187-fe82-4c9a-b565-95c7e561611f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.656820 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22457187-fe82-4c9a-b565-95c7e561611f-kube-api-access-kxrqc" (OuterVolumeSpecName: "kube-api-access-kxrqc") pod "22457187-fe82-4c9a-b565-95c7e561611f" (UID: "22457187-fe82-4c9a-b565-95c7e561611f"). InnerVolumeSpecName "kube-api-access-kxrqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.736172 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22457187-fe82-4c9a-b565-95c7e561611f" (UID: "22457187-fe82-4c9a-b565-95c7e561611f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.754575 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxrqc\" (UniqueName: \"kubernetes.io/projected/22457187-fe82-4c9a-b565-95c7e561611f-kube-api-access-kxrqc\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.754604 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.754613 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.787207 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-config-data" 
(OuterVolumeSpecName: "config-data") pod "22457187-fe82-4c9a-b565-95c7e561611f" (UID: "22457187-fe82-4c9a-b565-95c7e561611f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.814143 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgsgv"] Feb 14 19:08:23 crc kubenswrapper[4897]: I0214 19:08:23.856692 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22457187-fe82-4c9a-b565-95c7e561611f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.066791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hz8g2" event={"ID":"22457187-fe82-4c9a-b565-95c7e561611f","Type":"ContainerDied","Data":"255f5aeac36f3e91c4244e38b6607aa6886f641b2c19ea09b81af3e5d91a769b"} Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.067175 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255f5aeac36f3e91c4244e38b6607aa6886f641b2c19ea09b81af3e5d91a769b" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.066907 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tgsgv" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="registry-server" containerID="cri-o://e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de" gracePeriod=2 Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.066808 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hz8g2" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.330658 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.330927 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="95f8be13-487d-4d73-91c5-0996935e042c" containerName="nova-scheduler-scheduler" containerID="cri-o://4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f" gracePeriod=30 Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.344735 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.344968 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-log" containerID="cri-o://5ceee8207abb0f8e607590cb3be3de1accb9c50f6b0ec1319bfc1c74c7561608" gracePeriod=30 Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.345841 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-api" containerID="cri-o://b9adf73bc90557a49b5277f4e1c17e9d8547ab4da364480f92f937bdac9ed118" gracePeriod=30 Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.381382 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.381627 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-log" containerID="cri-o://60ed2b81ddf2f413c01e12978ae43b9013b26e3752dd24a40797129171d89369" gracePeriod=30 Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.381914 4897 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-metadata" containerID="cri-o://c0c5d828d890f790335115cf7cc36aa5e10fa411011ec6a3ef4df81041e287ed" gracePeriod=30 Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.812215 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.914810 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-catalog-content\") pod \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.914885 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-utilities\") pod \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.914994 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msft\" (UniqueName: \"kubernetes.io/projected/ee068b46-5bbc-4442-b2be-6b0f086d1edb-kube-api-access-8msft\") pod \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\" (UID: \"ee068b46-5bbc-4442-b2be-6b0f086d1edb\") " Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.915497 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-utilities" (OuterVolumeSpecName: "utilities") pod "ee068b46-5bbc-4442-b2be-6b0f086d1edb" (UID: "ee068b46-5bbc-4442-b2be-6b0f086d1edb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.916543 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.921229 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee068b46-5bbc-4442-b2be-6b0f086d1edb-kube-api-access-8msft" (OuterVolumeSpecName: "kube-api-access-8msft") pod "ee068b46-5bbc-4442-b2be-6b0f086d1edb" (UID: "ee068b46-5bbc-4442-b2be-6b0f086d1edb"). InnerVolumeSpecName "kube-api-access-8msft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:24 crc kubenswrapper[4897]: I0214 19:08:24.964849 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee068b46-5bbc-4442-b2be-6b0f086d1edb" (UID: "ee068b46-5bbc-4442-b2be-6b0f086d1edb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.019158 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee068b46-5bbc-4442-b2be-6b0f086d1edb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.019193 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msft\" (UniqueName: \"kubernetes.io/projected/ee068b46-5bbc-4442-b2be-6b0f086d1edb-kube-api-access-8msft\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.078409 4897 generic.go:334] "Generic (PLEG): container finished" podID="9b9564cf-9082-4da6-8197-229a5a16f424" containerID="5ceee8207abb0f8e607590cb3be3de1accb9c50f6b0ec1319bfc1c74c7561608" exitCode=143 Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.078474 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b9564cf-9082-4da6-8197-229a5a16f424","Type":"ContainerDied","Data":"5ceee8207abb0f8e607590cb3be3de1accb9c50f6b0ec1319bfc1c74c7561608"} Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.081289 4897 generic.go:334] "Generic (PLEG): container finished" podID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerID="e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de" exitCode=0 Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.081334 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgsgv" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.081371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerDied","Data":"e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de"} Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.081402 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgsgv" event={"ID":"ee068b46-5bbc-4442-b2be-6b0f086d1edb","Type":"ContainerDied","Data":"a45d57c48601dab592e67e536c05b3bf93c1270e756f21d581c1df86f6bdc35e"} Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.081419 4897 scope.go:117] "RemoveContainer" containerID="e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.083110 4897 generic.go:334] "Generic (PLEG): container finished" podID="482f17ca-b3a8-485b-bacc-58b97547974a" containerID="60ed2b81ddf2f413c01e12978ae43b9013b26e3752dd24a40797129171d89369" exitCode=143 Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.083131 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"482f17ca-b3a8-485b-bacc-58b97547974a","Type":"ContainerDied","Data":"60ed2b81ddf2f413c01e12978ae43b9013b26e3752dd24a40797129171d89369"} Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.108382 4897 scope.go:117] "RemoveContainer" containerID="1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.124548 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgsgv"] Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.152091 4897 scope.go:117] "RemoveContainer" containerID="0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834" Feb 14 
19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.155929 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tgsgv"] Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.189492 4897 scope.go:117] "RemoveContainer" containerID="e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de" Feb 14 19:08:25 crc kubenswrapper[4897]: E0214 19:08:25.190440 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de\": container with ID starting with e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de not found: ID does not exist" containerID="e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.190483 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de"} err="failed to get container status \"e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de\": rpc error: code = NotFound desc = could not find container \"e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de\": container with ID starting with e0463e2fb0434a637bd3ed41363edfb93ebdf8134ff51849ebf48cfcd25976de not found: ID does not exist" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.190536 4897 scope.go:117] "RemoveContainer" containerID="1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2" Feb 14 19:08:25 crc kubenswrapper[4897]: E0214 19:08:25.190964 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2\": container with ID starting with 1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2 not found: ID does not exist" 
containerID="1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.191009 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2"} err="failed to get container status \"1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2\": rpc error: code = NotFound desc = could not find container \"1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2\": container with ID starting with 1d4f237909a7f9f07a4c8f14c78f3c6041a0a0ad2695c86b0c9556799c3a84a2 not found: ID does not exist" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.191053 4897 scope.go:117] "RemoveContainer" containerID="0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834" Feb 14 19:08:25 crc kubenswrapper[4897]: E0214 19:08:25.191337 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834\": container with ID starting with 0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834 not found: ID does not exist" containerID="0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.191379 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834"} err="failed to get container status \"0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834\": rpc error: code = NotFound desc = could not find container \"0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834\": container with ID starting with 0111513b4da7df65bece61640a470423757585ab43fafbb2e9ec4f38b9532834 not found: ID does not exist" Feb 14 19:08:25 crc kubenswrapper[4897]: I0214 19:08:25.811726 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" path="/var/lib/kubelet/pods/ee068b46-5bbc-4442-b2be-6b0f086d1edb/volumes" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.123802 4897 generic.go:334] "Generic (PLEG): container finished" podID="482f17ca-b3a8-485b-bacc-58b97547974a" containerID="c0c5d828d890f790335115cf7cc36aa5e10fa411011ec6a3ef4df81041e287ed" exitCode=0 Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.124440 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"482f17ca-b3a8-485b-bacc-58b97547974a","Type":"ContainerDied","Data":"c0c5d828d890f790335115cf7cc36aa5e10fa411011ec6a3ef4df81041e287ed"} Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.125930 4897 generic.go:334] "Generic (PLEG): container finished" podID="9b9564cf-9082-4da6-8197-229a5a16f424" containerID="b9adf73bc90557a49b5277f4e1c17e9d8547ab4da364480f92f937bdac9ed118" exitCode=0 Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.126044 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b9564cf-9082-4da6-8197-229a5a16f424","Type":"ContainerDied","Data":"b9adf73bc90557a49b5277f4e1c17e9d8547ab4da364480f92f937bdac9ed118"} Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.126134 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b9564cf-9082-4da6-8197-229a5a16f424","Type":"ContainerDied","Data":"c99ec54e762c23685c9aa974cb34a93f005e9a9336d1bce25ee77e9a467c01a8"} Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.126197 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99ec54e762c23685c9aa974cb34a93f005e9a9336d1bce25ee77e9a467c01a8" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.259220 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.268913 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.293963 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-config-data\") pod \"482f17ca-b3a8-485b-bacc-58b97547974a\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294022 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk7gn\" (UniqueName: \"kubernetes.io/projected/9b9564cf-9082-4da6-8197-229a5a16f424-kube-api-access-qk7gn\") pod \"9b9564cf-9082-4da6-8197-229a5a16f424\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294068 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-internal-tls-certs\") pod \"9b9564cf-9082-4da6-8197-229a5a16f424\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294100 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-combined-ca-bundle\") pod \"482f17ca-b3a8-485b-bacc-58b97547974a\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294123 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9564cf-9082-4da6-8197-229a5a16f424-logs\") pod \"9b9564cf-9082-4da6-8197-229a5a16f424\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " Feb 
14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294174 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds9lp\" (UniqueName: \"kubernetes.io/projected/482f17ca-b3a8-485b-bacc-58b97547974a-kube-api-access-ds9lp\") pod \"482f17ca-b3a8-485b-bacc-58b97547974a\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294205 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-combined-ca-bundle\") pod \"9b9564cf-9082-4da6-8197-229a5a16f424\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294244 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482f17ca-b3a8-485b-bacc-58b97547974a-logs\") pod \"482f17ca-b3a8-485b-bacc-58b97547974a\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294282 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-public-tls-certs\") pod \"9b9564cf-9082-4da6-8197-229a5a16f424\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294324 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-nova-metadata-tls-certs\") pod \"482f17ca-b3a8-485b-bacc-58b97547974a\" (UID: \"482f17ca-b3a8-485b-bacc-58b97547974a\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294355 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-config-data\") pod \"9b9564cf-9082-4da6-8197-229a5a16f424\" (UID: \"9b9564cf-9082-4da6-8197-229a5a16f424\") " Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.294611 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9564cf-9082-4da6-8197-229a5a16f424-logs" (OuterVolumeSpecName: "logs") pod "9b9564cf-9082-4da6-8197-229a5a16f424" (UID: "9b9564cf-9082-4da6-8197-229a5a16f424"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.297517 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/482f17ca-b3a8-485b-bacc-58b97547974a-logs" (OuterVolumeSpecName: "logs") pod "482f17ca-b3a8-485b-bacc-58b97547974a" (UID: "482f17ca-b3a8-485b-bacc-58b97547974a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.298621 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482f17ca-b3a8-485b-bacc-58b97547974a-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.298728 4897 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9564cf-9082-4da6-8197-229a5a16f424-logs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.320105 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482f17ca-b3a8-485b-bacc-58b97547974a-kube-api-access-ds9lp" (OuterVolumeSpecName: "kube-api-access-ds9lp") pod "482f17ca-b3a8-485b-bacc-58b97547974a" (UID: "482f17ca-b3a8-485b-bacc-58b97547974a"). InnerVolumeSpecName "kube-api-access-ds9lp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.339345 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-config-data" (OuterVolumeSpecName: "config-data") pod "482f17ca-b3a8-485b-bacc-58b97547974a" (UID: "482f17ca-b3a8-485b-bacc-58b97547974a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.356203 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9564cf-9082-4da6-8197-229a5a16f424-kube-api-access-qk7gn" (OuterVolumeSpecName: "kube-api-access-qk7gn") pod "9b9564cf-9082-4da6-8197-229a5a16f424" (UID: "9b9564cf-9082-4da6-8197-229a5a16f424"). InnerVolumeSpecName "kube-api-access-qk7gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.366750 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "482f17ca-b3a8-485b-bacc-58b97547974a" (UID: "482f17ca-b3a8-485b-bacc-58b97547974a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.373355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b9564cf-9082-4da6-8197-229a5a16f424" (UID: "9b9564cf-9082-4da6-8197-229a5a16f424"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.377224 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-config-data" (OuterVolumeSpecName: "config-data") pod "9b9564cf-9082-4da6-8197-229a5a16f424" (UID: "9b9564cf-9082-4da6-8197-229a5a16f424"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.389447 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "482f17ca-b3a8-485b-bacc-58b97547974a" (UID: "482f17ca-b3a8-485b-bacc-58b97547974a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.395558 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b9564cf-9082-4da6-8197-229a5a16f424" (UID: "9b9564cf-9082-4da6-8197-229a5a16f424"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401650 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401721 4897 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401752 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401761 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401770 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk7gn\" (UniqueName: \"kubernetes.io/projected/9b9564cf-9082-4da6-8197-229a5a16f424-kube-api-access-qk7gn\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401778 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482f17ca-b3a8-485b-bacc-58b97547974a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.401786 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds9lp\" (UniqueName: \"kubernetes.io/projected/482f17ca-b3a8-485b-bacc-58b97547974a-kube-api-access-ds9lp\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc 
kubenswrapper[4897]: I0214 19:08:28.401794 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.408547 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9b9564cf-9082-4da6-8197-229a5a16f424" (UID: "9b9564cf-9082-4da6-8197-229a5a16f424"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:28 crc kubenswrapper[4897]: E0214 19:08:28.429928 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 19:08:28 crc kubenswrapper[4897]: E0214 19:08:28.431305 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 19:08:28 crc kubenswrapper[4897]: E0214 19:08:28.432356 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 19:08:28 crc kubenswrapper[4897]: E0214 19:08:28.432380 4897 prober.go:104] "Probe errored" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="95f8be13-487d-4d73-91c5-0996935e042c" containerName="nova-scheduler-scheduler" Feb 14 19:08:28 crc kubenswrapper[4897]: I0214 19:08:28.503125 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9564cf-9082-4da6-8197-229a5a16f424-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.137072 4897 generic.go:334] "Generic (PLEG): container finished" podID="95f8be13-487d-4d73-91c5-0996935e042c" containerID="4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f" exitCode=0 Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.137174 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95f8be13-487d-4d73-91c5-0996935e042c","Type":"ContainerDied","Data":"4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f"} Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.137479 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95f8be13-487d-4d73-91c5-0996935e042c","Type":"ContainerDied","Data":"6aee2472e0548b996d4b4589752644f5b13af40f874da39ccbfbd1391f47f986"} Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.137495 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aee2472e0548b996d4b4589752644f5b13af40f874da39ccbfbd1391f47f986" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.140503 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"482f17ca-b3a8-485b-bacc-58b97547974a","Type":"ContainerDied","Data":"bc7e119c15b1d60a0386c6deefbecd91ee1c9d3d5029c5fd86b4059c3b577261"} Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.140523 4897 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.140554 4897 scope.go:117] "RemoveContainer" containerID="c0c5d828d890f790335115cf7cc36aa5e10fa411011ec6a3ef4df81041e287ed" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.140562 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.163378 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.172258 4897 scope.go:117] "RemoveContainer" containerID="60ed2b81ddf2f413c01e12978ae43b9013b26e3752dd24a40797129171d89369" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.184634 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.197749 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.224136 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.224829 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-metadata" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.224863 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-metadata" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.224891 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22457187-fe82-4c9a-b565-95c7e561611f" containerName="nova-manage" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.224905 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="22457187-fe82-4c9a-b565-95c7e561611f" containerName="nova-manage" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.224924 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-log" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.224937 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-log" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.224961 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="extract-content" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.224973 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="extract-content" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.224999 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95f8be13-487d-4d73-91c5-0996935e042c" containerName="nova-scheduler-scheduler" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225011 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="95f8be13-487d-4d73-91c5-0996935e042c" containerName="nova-scheduler-scheduler" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225067 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="extract-utilities" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225085 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="extract-utilities" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225116 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-api" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225128 4897 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-api" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225141 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="registry-server" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225153 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="registry-server" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225179 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-log" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225192 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-log" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225213 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="extract-utilities" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225226 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="extract-utilities" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225269 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="registry-server" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225281 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="registry-server" Feb 14 19:08:29 crc kubenswrapper[4897]: E0214 19:08:29.225308 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="extract-content" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225321 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="extract-content" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225756 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-log" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225795 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-metadata" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225812 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" containerName="nova-metadata-log" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225860 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7b8b61-a5fd-4bea-91f0-45342d6587f2" containerName="registry-server" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225887 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee068b46-5bbc-4442-b2be-6b0f086d1edb" containerName="registry-server" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225911 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="95f8be13-487d-4d73-91c5-0996935e042c" containerName="nova-scheduler-scheduler" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225936 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" containerName="nova-api-api" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.225958 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="22457187-fe82-4c9a-b565-95c7e561611f" containerName="nova-manage" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.228514 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.239094 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.239389 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.252143 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.277721 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.290119 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.314423 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.317535 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.321803 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.322073 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.322134 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-combined-ca-bundle\") pod \"95f8be13-487d-4d73-91c5-0996935e042c\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.322263 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-config-data\") pod \"95f8be13-487d-4d73-91c5-0996935e042c\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.322411 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv9k7\" (UniqueName: \"kubernetes.io/projected/95f8be13-487d-4d73-91c5-0996935e042c-kube-api-access-wv9k7\") pod \"95f8be13-487d-4d73-91c5-0996935e042c\" (UID: \"95f8be13-487d-4d73-91c5-0996935e042c\") " Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.322873 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.326420 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.331208 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95f8be13-487d-4d73-91c5-0996935e042c-kube-api-access-wv9k7" 
(OuterVolumeSpecName: "kube-api-access-wv9k7") pod "95f8be13-487d-4d73-91c5-0996935e042c" (UID: "95f8be13-487d-4d73-91c5-0996935e042c"). InnerVolumeSpecName "kube-api-access-wv9k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.375848 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95f8be13-487d-4d73-91c5-0996935e042c" (UID: "95f8be13-487d-4d73-91c5-0996935e042c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.381877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-config-data" (OuterVolumeSpecName: "config-data") pod "95f8be13-487d-4d73-91c5-0996935e042c" (UID: "95f8be13-487d-4d73-91c5-0996935e042c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425570 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425614 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbg2\" (UniqueName: \"kubernetes.io/projected/ee0355d7-cd7c-4073-8996-b6e54e93319d-kube-api-access-zmbg2\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425648 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-logs\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425673 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425810 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0355d7-cd7c-4073-8996-b6e54e93319d-logs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0" Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425827 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425895 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.425920 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwq8\" (UniqueName: \"kubernetes.io/projected/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-kube-api-access-pmwq8\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.426058 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-config-data\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.426122 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.426164 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-config-data\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.426275 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.426291 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f8be13-487d-4d73-91c5-0996935e042c-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.426301 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv9k7\" (UniqueName: \"kubernetes.io/projected/95f8be13-487d-4d73-91c5-0996935e042c-kube-api-access-wv9k7\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528310 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528401 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-config-data\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528624 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmbg2\" (UniqueName: \"kubernetes.io/projected/ee0355d7-cd7c-4073-8996-b6e54e93319d-kube-api-access-zmbg2\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528681 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-logs\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528738 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528838 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528872 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0355d7-cd7c-4073-8996-b6e54e93319d-logs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.528960 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.529000 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmwq8\" (UniqueName: \"kubernetes.io/projected/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-kube-api-access-pmwq8\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.529092 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-logs\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.529126 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-config-data\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.530523 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0355d7-cd7c-4073-8996-b6e54e93319d-logs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.532839 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.533110 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.536324 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.536723 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-config-data\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.538218 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.541360 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-config-data\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.543077 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0355d7-cd7c-4073-8996-b6e54e93319d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.550907 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmbg2\" (UniqueName: \"kubernetes.io/projected/ee0355d7-cd7c-4073-8996-b6e54e93319d-kube-api-access-zmbg2\") pod \"nova-api-0\" (UID: \"ee0355d7-cd7c-4073-8996-b6e54e93319d\") " pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.570599 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmwq8\" (UniqueName: \"kubernetes.io/projected/964cb23c-1cc7-43f9-8ce3-b5c280f5cd28-kube-api-access-pmwq8\") pod \"nova-metadata-0\" (UID: \"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28\") " pod="openstack/nova-metadata-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.686422 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.836187 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482f17ca-b3a8-485b-bacc-58b97547974a" path="/var/lib/kubelet/pods/482f17ca-b3a8-485b-bacc-58b97547974a/volumes"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.838241 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9564cf-9082-4da6-8197-229a5a16f424" path="/var/lib/kubelet/pods/9b9564cf-9082-4da6-8197-229a5a16f424/volumes"
Feb 14 19:08:29 crc kubenswrapper[4897]: I0214 19:08:29.865599 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.150927 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.205083 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.224133 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:08:30 crc kubenswrapper[4897]: W0214 19:08:30.236960 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0355d7_cd7c_4073_8996_b6e54e93319d.slice/crio-1f015c9b02f60b02409ddbbd688e0595c0e597eb9031cf23dd30d3ed0a6d4248 WatchSource:0}: Error finding container 1f015c9b02f60b02409ddbbd688e0595c0e597eb9031cf23dd30d3ed0a6d4248: Status 404 returned error can't find the container with id 1f015c9b02f60b02409ddbbd688e0595c0e597eb9031cf23dd30d3ed0a6d4248
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.237752 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.239332 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.244472 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.265648 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.277681 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.344874 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.364066 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/965f9d5d-41a1-413c-a99a-09596c896734-config-data\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.364398 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/965f9d5d-41a1-413c-a99a-09596c896734-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.364539 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2289n\" (UniqueName: \"kubernetes.io/projected/965f9d5d-41a1-413c-a99a-09596c896734-kube-api-access-2289n\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.466505 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/965f9d5d-41a1-413c-a99a-09596c896734-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.466899 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2289n\" (UniqueName: \"kubernetes.io/projected/965f9d5d-41a1-413c-a99a-09596c896734-kube-api-access-2289n\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.467018 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/965f9d5d-41a1-413c-a99a-09596c896734-config-data\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.471024 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/965f9d5d-41a1-413c-a99a-09596c896734-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.471536 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/965f9d5d-41a1-413c-a99a-09596c896734-config-data\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.494210 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2289n\" (UniqueName: \"kubernetes.io/projected/965f9d5d-41a1-413c-a99a-09596c896734-kube-api-access-2289n\") pod \"nova-scheduler-0\" (UID: \"965f9d5d-41a1-413c-a99a-09596c896734\") " pod="openstack/nova-scheduler-0"
Feb 14 19:08:30 crc kubenswrapper[4897]: I0214 19:08:30.561150 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.064427 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.166094 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee0355d7-cd7c-4073-8996-b6e54e93319d","Type":"ContainerStarted","Data":"300f640d2a7f5a1e272ffe72b3e169bc9553ce6ffd04f88de7fc341f7745f1be"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.166134 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee0355d7-cd7c-4073-8996-b6e54e93319d","Type":"ContainerStarted","Data":"94b6079edba931ee067988beb3db8caf576b80bb6b577fe8d3a99d1498c3d631"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.166144 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee0355d7-cd7c-4073-8996-b6e54e93319d","Type":"ContainerStarted","Data":"1f015c9b02f60b02409ddbbd688e0595c0e597eb9031cf23dd30d3ed0a6d4248"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.169605 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28","Type":"ContainerStarted","Data":"eb5a6dd89a50e07c7c4b50ac978a7325d1d50a3dd29ae62eb60eb1e2c62e2455"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.169633 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28","Type":"ContainerStarted","Data":"6451e7f5956361d5427bf98039a1b2d45c2a7a2e9ed77b3124bec14bb18425fc"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.169643 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"964cb23c-1cc7-43f9-8ce3-b5c280f5cd28","Type":"ContainerStarted","Data":"de6c5eb85cf320c4c3085de520e2aafce85a3aac9c903fb385d05b25b0452c9e"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.172196 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"965f9d5d-41a1-413c-a99a-09596c896734","Type":"ContainerStarted","Data":"bc4105162ef83a405c88b1cfc6100dac9bbeb1810cf4256490773c832c936db8"}
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.186656 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.186640628 podStartE2EDuration="2.186640628s" podCreationTimestamp="2026-02-14 19:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:31.182693244 +0000 UTC m=+1564.159101737" watchObservedRunningTime="2026-02-14 19:08:31.186640628 +0000 UTC m=+1564.163049111"
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.200268 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.200250235 podStartE2EDuration="2.200250235s" podCreationTimestamp="2026-02-14 19:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:31.198540492 +0000 UTC m=+1564.174948985" watchObservedRunningTime="2026-02-14 19:08:31.200250235 +0000 UTC m=+1564.176658728"
Feb 14 19:08:31 crc kubenswrapper[4897]: I0214 19:08:31.813827 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95f8be13-487d-4d73-91c5-0996935e042c" path="/var/lib/kubelet/pods/95f8be13-487d-4d73-91c5-0996935e042c/volumes"
Feb 14 19:08:32 crc kubenswrapper[4897]: I0214 19:08:32.194085 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"965f9d5d-41a1-413c-a99a-09596c896734","Type":"ContainerStarted","Data":"8ca97d27aad48a615c4bdb40b7db13b046156c5124b398d0de01df2446f53fab"}
Feb 14 19:08:32 crc kubenswrapper[4897]: I0214 19:08:32.226062 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.226043486 podStartE2EDuration="2.226043486s" podCreationTimestamp="2026-02-14 19:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:08:32.223562079 +0000 UTC m=+1565.199970592" watchObservedRunningTime="2026-02-14 19:08:32.226043486 +0000 UTC m=+1565.202451979"
Feb 14 19:08:34 crc kubenswrapper[4897]: I0214 19:08:34.867252 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 19:08:34 crc kubenswrapper[4897]: I0214 19:08:34.867802 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 19:08:35 crc kubenswrapper[4897]: I0214 19:08:35.561877 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 14 19:08:39 crc kubenswrapper[4897]: I0214 19:08:39.150198 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 14 19:08:39 crc kubenswrapper[4897]: I0214 19:08:39.686964 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 19:08:39 crc kubenswrapper[4897]: I0214 19:08:39.687011 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 19:08:39 crc kubenswrapper[4897]: I0214 19:08:39.868555 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 14 19:08:39 crc kubenswrapper[4897]: I0214 19:08:39.869082 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 14 19:08:40 crc kubenswrapper[4897]: I0214 19:08:40.563350 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 14 19:08:40 crc kubenswrapper[4897]: I0214 19:08:40.613375 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 14 19:08:40 crc kubenswrapper[4897]: I0214 19:08:40.699208 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee0355d7-cd7c-4073-8996-b6e54e93319d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 19:08:40 crc kubenswrapper[4897]: I0214 19:08:40.699226 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee0355d7-cd7c-4073-8996-b6e54e93319d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 19:08:40 crc kubenswrapper[4897]: I0214 19:08:40.915320 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="964cb23c-1cc7-43f9-8ce3-b5c280f5cd28" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 19:08:40 crc kubenswrapper[4897]: I0214 19:08:40.915640 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="964cb23c-1cc7-43f9-8ce3-b5c280f5cd28" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 19:08:41 crc kubenswrapper[4897]: I0214 19:08:41.360585 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 14 19:08:43 crc kubenswrapper[4897]: I0214 19:08:43.614289 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 19:08:43 crc kubenswrapper[4897]: I0214 19:08:43.617662 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="31fc1ad2-32a3-4e47-846f-a69e5ee34493" containerName="kube-state-metrics" containerID="cri-o://286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff" gracePeriod=30
Feb 14 19:08:43 crc kubenswrapper[4897]: I0214 19:08:43.813433 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 14 19:08:43 crc kubenswrapper[4897]: I0214 19:08:43.813692 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="a3bb3e8e-2264-4122-be43-4c1be375ceb1" containerName="mysqld-exporter" containerID="cri-o://3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d" gracePeriod=30
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.299736 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.399547 4897 generic.go:334] "Generic (PLEG): container finished" podID="31fc1ad2-32a3-4e47-846f-a69e5ee34493" containerID="286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff" exitCode=2
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.399785 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31fc1ad2-32a3-4e47-846f-a69e5ee34493","Type":"ContainerDied","Data":"286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff"}
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.399811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31fc1ad2-32a3-4e47-846f-a69e5ee34493","Type":"ContainerDied","Data":"3b9be166c63ce14e55a0f6c7be42d303c4aa911740756a271d1f765c030bb366"}
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.399855 4897 scope.go:117] "RemoveContainer" containerID="286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.400038 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.400566 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.408340 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkdcf\" (UniqueName: \"kubernetes.io/projected/31fc1ad2-32a3-4e47-846f-a69e5ee34493-kube-api-access-dkdcf\") pod \"31fc1ad2-32a3-4e47-846f-a69e5ee34493\" (UID: \"31fc1ad2-32a3-4e47-846f-a69e5ee34493\") "
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.414372 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fc1ad2-32a3-4e47-846f-a69e5ee34493-kube-api-access-dkdcf" (OuterVolumeSpecName: "kube-api-access-dkdcf") pod "31fc1ad2-32a3-4e47-846f-a69e5ee34493" (UID: "31fc1ad2-32a3-4e47-846f-a69e5ee34493"). InnerVolumeSpecName "kube-api-access-dkdcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.436951 4897 generic.go:334] "Generic (PLEG): container finished" podID="a3bb3e8e-2264-4122-be43-4c1be375ceb1" containerID="3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d" exitCode=2
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.437013 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a3bb3e8e-2264-4122-be43-4c1be375ceb1","Type":"ContainerDied","Data":"3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d"}
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.437055 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a3bb3e8e-2264-4122-be43-4c1be375ceb1","Type":"ContainerDied","Data":"b514975a338e793b13f844a9ca625722a4c053c2ee4ec8371d4f42f2332d8f6e"}
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.455517 4897 scope.go:117] "RemoveContainer" containerID="286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff"
Feb 14 19:08:44 crc kubenswrapper[4897]: E0214 19:08:44.455959 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff\": container with ID starting with 286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff not found: ID does not exist" containerID="286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.456008 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff"} err="failed to get container status \"286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff\": rpc error: code = NotFound desc = could not find container \"286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff\": container with ID starting with 286e2848ae1acf45f0d512d54f3cd2bea97153a2fe9e0244a0e0e58297bc05ff not found: ID does not exist"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.456053 4897 scope.go:117] "RemoveContainer" containerID="3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.480941 4897 scope.go:117] "RemoveContainer" containerID="3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d"
Feb 14 19:08:44 crc kubenswrapper[4897]: E0214 19:08:44.481493 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d\": container with ID starting with 3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d not found: ID does not exist" containerID="3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.481537 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d"} err="failed to get container status \"3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d\": rpc error: code = NotFound desc = could not find container \"3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d\": container with ID starting with 3039e5bd149172a4ee3cdfc626c4b4100f167772d57d98eb272ec9080ad8606d not found: ID does not exist"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.510789 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-combined-ca-bundle\") pod \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") "
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.510937 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-config-data\") pod \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") "
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.510981 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xl5g\" (UniqueName: \"kubernetes.io/projected/a3bb3e8e-2264-4122-be43-4c1be375ceb1-kube-api-access-9xl5g\") pod \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\" (UID: \"a3bb3e8e-2264-4122-be43-4c1be375ceb1\") "
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.512481 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkdcf\" (UniqueName: \"kubernetes.io/projected/31fc1ad2-32a3-4e47-846f-a69e5ee34493-kube-api-access-dkdcf\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.516344 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bb3e8e-2264-4122-be43-4c1be375ceb1-kube-api-access-9xl5g" (OuterVolumeSpecName: "kube-api-access-9xl5g") pod "a3bb3e8e-2264-4122-be43-4c1be375ceb1" (UID: "a3bb3e8e-2264-4122-be43-4c1be375ceb1"). InnerVolumeSpecName "kube-api-access-9xl5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.544016 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3bb3e8e-2264-4122-be43-4c1be375ceb1" (UID: "a3bb3e8e-2264-4122-be43-4c1be375ceb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.584345 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-config-data" (OuterVolumeSpecName: "config-data") pod "a3bb3e8e-2264-4122-be43-4c1be375ceb1" (UID: "a3bb3e8e-2264-4122-be43-4c1be375ceb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.616333 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.616393 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bb3e8e-2264-4122-be43-4c1be375ceb1-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.616411 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xl5g\" (UniqueName: \"kubernetes.io/projected/a3bb3e8e-2264-4122-be43-4c1be375ceb1-kube-api-access-9xl5g\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.735255 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.746968 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.767655 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 19:08:44 crc kubenswrapper[4897]: E0214 19:08:44.768350 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31fc1ad2-32a3-4e47-846f-a69e5ee34493" containerName="kube-state-metrics"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.768380 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="31fc1ad2-32a3-4e47-846f-a69e5ee34493" containerName="kube-state-metrics"
Feb 14 19:08:44 crc kubenswrapper[4897]: E0214 19:08:44.768413 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bb3e8e-2264-4122-be43-4c1be375ceb1" containerName="mysqld-exporter"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.768422 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bb3e8e-2264-4122-be43-4c1be375ceb1" containerName="mysqld-exporter"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.768718 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bb3e8e-2264-4122-be43-4c1be375ceb1" containerName="mysqld-exporter"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.768758 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="31fc1ad2-32a3-4e47-846f-a69e5ee34493" containerName="kube-state-metrics"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.769881 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.772983 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.773449 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.792328 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.933975 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.934108 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.934192 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0"
Feb 14 19:08:44 crc kubenswrapper[4897]: I0214 19:08:44.934255 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbghj\" (UniqueName: \"kubernetes.io/projected/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-api-access-sbghj\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0"
Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.036917 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0"
Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.037021 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0"
Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.037108 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" 
(UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.037171 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbghj\" (UniqueName: \"kubernetes.io/projected/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-api-access-sbghj\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.041662 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.045257 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.046567 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.069069 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbghj\" (UniqueName: \"kubernetes.io/projected/48ec6bd3-236f-4982-8dfa-e5c72c4d67bc-kube-api-access-sbghj\") pod \"kube-state-metrics-0\" (UID: \"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc\") " 
pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.115650 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.449515 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.500469 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.524098 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.535014 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.537085 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.540090 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.540476 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.550987 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.552466 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.552540 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-config-data\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.560289 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.560567 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmb4g\" (UniqueName: \"kubernetes.io/projected/ce461153-c9cf-4a4a-a546-bf3a5effc936-kube-api-access-tmb4g\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.603241 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.662375 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.662487 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmb4g\" (UniqueName: \"kubernetes.io/projected/ce461153-c9cf-4a4a-a546-bf3a5effc936-kube-api-access-tmb4g\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " 
pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.662605 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.662642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-config-data\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.667891 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-config-data\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.668496 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.669991 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce461153-c9cf-4a4a-a546-bf3a5effc936-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.688612 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-tmb4g\" (UniqueName: \"kubernetes.io/projected/ce461153-c9cf-4a4a-a546-bf3a5effc936-kube-api-access-tmb4g\") pod \"mysqld-exporter-0\" (UID: \"ce461153-c9cf-4a4a-a546-bf3a5effc936\") " pod="openstack/mysqld-exporter-0" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.806871 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fc1ad2-32a3-4e47-846f-a69e5ee34493" path="/var/lib/kubelet/pods/31fc1ad2-32a3-4e47-846f-a69e5ee34493/volumes" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.807799 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bb3e8e-2264-4122-be43-4c1be375ceb1" path="/var/lib/kubelet/pods/a3bb3e8e-2264-4122-be43-4c1be375ceb1/volumes" Feb 14 19:08:45 crc kubenswrapper[4897]: I0214 19:08:45.867908 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.262874 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.263590 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-central-agent" containerID="cri-o://35004ae46b90519f0a307a00890566c224d828b57484ce86c8db5cce73276be7" gracePeriod=30 Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.263782 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="proxy-httpd" containerID="cri-o://f5e64bfb563f23c6ec0fe6f5e0a4a36bee8bfe647cc7003eb26406bc73ce47a6" gracePeriod=30 Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.263879 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65ae60bb-0390-4729-8e95-a59633606a95" 
containerName="sg-core" containerID="cri-o://6486ffdafc27d5a8464330b77e4278af807c73c5fb798426c68013d04ef615ba" gracePeriod=30 Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.264094 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-notification-agent" containerID="cri-o://9431cb9bbf0fb3594f8c7eb95f93b749700eb842dfc5699bd6c14340599534a8" gracePeriod=30 Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.435464 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 19:08:46 crc kubenswrapper[4897]: W0214 19:08:46.442536 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce461153_c9cf_4a4a_a546_bf3a5effc936.slice/crio-3748ebd81fb546b3b8ce035d3a2ffe4c12bb38a9b668e3b9b08b75a6a7f4cc41 WatchSource:0}: Error finding container 3748ebd81fb546b3b8ce035d3a2ffe4c12bb38a9b668e3b9b08b75a6a7f4cc41: Status 404 returned error can't find the container with id 3748ebd81fb546b3b8ce035d3a2ffe4c12bb38a9b668e3b9b08b75a6a7f4cc41 Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.465605 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ce461153-c9cf-4a4a-a546-bf3a5effc936","Type":"ContainerStarted","Data":"3748ebd81fb546b3b8ce035d3a2ffe4c12bb38a9b668e3b9b08b75a6a7f4cc41"} Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.471255 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc","Type":"ContainerStarted","Data":"c1c1c286663a525a4be4401c2677d60548c33ad10ce6c9f0a9e7811aa89cf6f9"} Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.471335 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"48ec6bd3-236f-4982-8dfa-e5c72c4d67bc","Type":"ContainerStarted","Data":"da14405d8417cdf3c4dad032e48dc721c9ab6f91bbcfa791b567be4f88ac899d"} Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.471374 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.474841 4897 generic.go:334] "Generic (PLEG): container finished" podID="65ae60bb-0390-4729-8e95-a59633606a95" containerID="6486ffdafc27d5a8464330b77e4278af807c73c5fb798426c68013d04ef615ba" exitCode=2 Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.474888 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerDied","Data":"6486ffdafc27d5a8464330b77e4278af807c73c5fb798426c68013d04ef615ba"} Feb 14 19:08:46 crc kubenswrapper[4897]: I0214 19:08:46.494940 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.06131985 podStartE2EDuration="2.494919147s" podCreationTimestamp="2026-02-14 19:08:44 +0000 UTC" firstStartedPulling="2026-02-14 19:08:45.598839366 +0000 UTC m=+1578.575247849" lastFinishedPulling="2026-02-14 19:08:46.032438663 +0000 UTC m=+1579.008847146" observedRunningTime="2026-02-14 19:08:46.487243616 +0000 UTC m=+1579.463652109" watchObservedRunningTime="2026-02-14 19:08:46.494919147 +0000 UTC m=+1579.471327630" Feb 14 19:08:47 crc kubenswrapper[4897]: I0214 19:08:47.487712 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ce461153-c9cf-4a4a-a546-bf3a5effc936","Type":"ContainerStarted","Data":"e2b52a0c0e2dd00b674cbc463f37330e9789e0b32d4108b974a75f8b83d92a44"} Feb 14 19:08:47 crc kubenswrapper[4897]: I0214 19:08:47.492806 4897 generic.go:334] "Generic (PLEG): container finished" podID="65ae60bb-0390-4729-8e95-a59633606a95" 
containerID="f5e64bfb563f23c6ec0fe6f5e0a4a36bee8bfe647cc7003eb26406bc73ce47a6" exitCode=0 Feb 14 19:08:47 crc kubenswrapper[4897]: I0214 19:08:47.492833 4897 generic.go:334] "Generic (PLEG): container finished" podID="65ae60bb-0390-4729-8e95-a59633606a95" containerID="35004ae46b90519f0a307a00890566c224d828b57484ce86c8db5cce73276be7" exitCode=0 Feb 14 19:08:47 crc kubenswrapper[4897]: I0214 19:08:47.492910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerDied","Data":"f5e64bfb563f23c6ec0fe6f5e0a4a36bee8bfe647cc7003eb26406bc73ce47a6"} Feb 14 19:08:47 crc kubenswrapper[4897]: I0214 19:08:47.492963 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerDied","Data":"35004ae46b90519f0a307a00890566c224d828b57484ce86c8db5cce73276be7"} Feb 14 19:08:47 crc kubenswrapper[4897]: I0214 19:08:47.514104 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.982525828 podStartE2EDuration="2.514081751s" podCreationTimestamp="2026-02-14 19:08:45 +0000 UTC" firstStartedPulling="2026-02-14 19:08:46.445145384 +0000 UTC m=+1579.421553867" lastFinishedPulling="2026-02-14 19:08:46.976701297 +0000 UTC m=+1579.953109790" observedRunningTime="2026-02-14 19:08:47.503693344 +0000 UTC m=+1580.480101847" watchObservedRunningTime="2026-02-14 19:08:47.514081751 +0000 UTC m=+1580.490490234" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.694846 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.696693 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.697070 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.697128 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.703610 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.707910 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.872623 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.875620 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 19:08:49 crc kubenswrapper[4897]: I0214 19:08:49.881369 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.535284 4897 generic.go:334] "Generic (PLEG): container finished" podID="65ae60bb-0390-4729-8e95-a59633606a95" containerID="9431cb9bbf0fb3594f8c7eb95f93b749700eb842dfc5699bd6c14340599534a8" exitCode=0 Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.535364 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerDied","Data":"9431cb9bbf0fb3594f8c7eb95f93b749700eb842dfc5699bd6c14340599534a8"} Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.535740 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65ae60bb-0390-4729-8e95-a59633606a95","Type":"ContainerDied","Data":"ead88010b77c6475e09b191cd04d5446515d3b53fdd5bca36b4e9c4fd636503d"} Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.535760 4897 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="ead88010b77c6475e09b191cd04d5446515d3b53fdd5bca36b4e9c4fd636503d" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.543796 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.643836 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.714628 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-sg-core-conf-yaml\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.714729 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-combined-ca-bundle\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.714865 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-config-data\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.714949 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9vvb\" (UniqueName: \"kubernetes.io/projected/65ae60bb-0390-4729-8e95-a59633606a95-kube-api-access-k9vvb\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.715020 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-log-httpd\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.715142 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-scripts\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.715261 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-run-httpd\") pod \"65ae60bb-0390-4729-8e95-a59633606a95\" (UID: \"65ae60bb-0390-4729-8e95-a59633606a95\") " Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.715548 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.715804 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.716209 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.716229 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65ae60bb-0390-4729-8e95-a59633606a95-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.720517 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-scripts" (OuterVolumeSpecName: "scripts") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.731580 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ae60bb-0390-4729-8e95-a59633606a95-kube-api-access-k9vvb" (OuterVolumeSpecName: "kube-api-access-k9vvb") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "kube-api-access-k9vvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.810479 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.819851 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9vvb\" (UniqueName: \"kubernetes.io/projected/65ae60bb-0390-4729-8e95-a59633606a95-kube-api-access-k9vvb\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.820016 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.820071 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.847701 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.884447 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-config-data" (OuterVolumeSpecName: "config-data") pod "65ae60bb-0390-4729-8e95-a59633606a95" (UID: "65ae60bb-0390-4729-8e95-a59633606a95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.922690 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:50 crc kubenswrapper[4897]: I0214 19:08:50.923017 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ae60bb-0390-4729-8e95-a59633606a95-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.546228 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.595293 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.609056 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.625275 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:08:51 crc kubenswrapper[4897]: E0214 19:08:51.625867 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="proxy-httpd"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.625890 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="proxy-httpd"
Feb 14 19:08:51 crc kubenswrapper[4897]: E0214 19:08:51.625931 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-central-agent"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.625939 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-central-agent"
Feb 14 19:08:51 crc kubenswrapper[4897]: E0214 19:08:51.625956 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="sg-core"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.625965 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="sg-core"
Feb 14 19:08:51 crc kubenswrapper[4897]: E0214 19:08:51.625988 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-notification-agent"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.625996 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-notification-agent"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.626303 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-central-agent"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.626341 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="sg-core"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.626360 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="proxy-httpd"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.626377 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ae60bb-0390-4729-8e95-a59633606a95" containerName="ceilometer-notification-agent"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.629383 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.632507 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.635178 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.635395 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.638093 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643115 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxb4\" (UniqueName: \"kubernetes.io/projected/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-kube-api-access-nrxb4\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643182 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-scripts\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643356 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643396 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-config-data\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643492 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-log-httpd\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643573 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-run-httpd\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643598 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.643640 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.746205 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrxb4\" (UniqueName: \"kubernetes.io/projected/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-kube-api-access-nrxb4\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.746549 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-scripts\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.746671 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.746717 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-config-data\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.747500 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-log-httpd\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.747623 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-run-httpd\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.747638 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.748048 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-log-httpd\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.748112 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-run-httpd\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.748178 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.750938 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-config-data\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.751609 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.751865 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-scripts\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.755417 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.759181 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.769608 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrxb4\" (UniqueName: \"kubernetes.io/projected/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-kube-api-access-nrxb4\") pod \"ceilometer-0\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") " pod="openstack/ceilometer-0"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.808734 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ae60bb-0390-4729-8e95-a59633606a95" path="/var/lib/kubelet/pods/65ae60bb-0390-4729-8e95-a59633606a95/volumes"
Feb 14 19:08:51 crc kubenswrapper[4897]: I0214 19:08:51.948664 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:08:52 crc kubenswrapper[4897]: W0214 19:08:52.485352 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d5972ac_5eb1_49b5_b70c_25d1777f89d3.slice/crio-ca9c87b7986d73a4e74449545c5ebee1f5b14dbc3c932b60894c16ec6fe43877 WatchSource:0}: Error finding container ca9c87b7986d73a4e74449545c5ebee1f5b14dbc3c932b60894c16ec6fe43877: Status 404 returned error can't find the container with id ca9c87b7986d73a4e74449545c5ebee1f5b14dbc3c932b60894c16ec6fe43877
Feb 14 19:08:52 crc kubenswrapper[4897]: I0214 19:08:52.489641 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:08:52 crc kubenswrapper[4897]: I0214 19:08:52.561737 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerStarted","Data":"ca9c87b7986d73a4e74449545c5ebee1f5b14dbc3c932b60894c16ec6fe43877"}
Feb 14 19:08:53 crc kubenswrapper[4897]: I0214 19:08:53.578199 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerStarted","Data":"03441f53413ad382d2c05b5d1bcd0f3d216d2eaf5d8f1ec8d7bd898e6b8128ba"}
Feb 14 19:08:54 crc kubenswrapper[4897]: I0214 19:08:54.590669 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerStarted","Data":"7449fb55b01895d2c419a39f593ee8a0e3d4b5e000970c2c8c7d41bc0b2b0ba2"}
Feb 14 19:08:55 crc kubenswrapper[4897]: I0214 19:08:55.375607 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 14 19:08:56 crc kubenswrapper[4897]: I0214 19:08:56.612633 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerStarted","Data":"a346edc29687b0f84455d1f5d761a4d35c4e1cd3802cc1e0e9a511b46a7d1dd5"}
Feb 14 19:08:57 crc kubenswrapper[4897]: I0214 19:08:57.625535 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerStarted","Data":"872cb853dfea09c33d3d83294656dc36709c16823d525509a0aa90b95d6a1884"}
Feb 14 19:08:57 crc kubenswrapper[4897]: I0214 19:08:57.626796 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 19:08:57 crc kubenswrapper[4897]: I0214 19:08:57.662105 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.179767862 podStartE2EDuration="6.662083727s" podCreationTimestamp="2026-02-14 19:08:51 +0000 UTC" firstStartedPulling="2026-02-14 19:08:52.489058096 +0000 UTC m=+1585.465466579" lastFinishedPulling="2026-02-14 19:08:56.971373941 +0000 UTC m=+1589.947782444" observedRunningTime="2026-02-14 19:08:57.646120336 +0000 UTC m=+1590.622528819" watchObservedRunningTime="2026-02-14 19:08:57.662083727 +0000 UTC m=+1590.638492210"
Feb 14 19:09:01 crc kubenswrapper[4897]: I0214 19:09:01.726127 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:09:01 crc kubenswrapper[4897]: I0214 19:09:01.726801 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:09:21 crc kubenswrapper[4897]: I0214 19:09:21.970424 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 14 19:09:31 crc kubenswrapper[4897]: I0214 19:09:31.725814 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:09:31 crc kubenswrapper[4897]: I0214 19:09:31.726573 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.492739 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-2rjdz"]
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.508356 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-2rjdz"]
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.581976 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-cqk8v"]
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.593386 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.636135 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-config-data\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.636222 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-combined-ca-bundle\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.636257 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmpx\" (UniqueName: \"kubernetes.io/projected/b49610a6-b99e-432f-9d5f-271cec21d2e6-kube-api-access-bpmpx\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.636579 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-cqk8v"]
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.739110 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-config-data\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.739196 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-combined-ca-bundle\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.739247 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmpx\" (UniqueName: \"kubernetes.io/projected/b49610a6-b99e-432f-9d5f-271cec21d2e6-kube-api-access-bpmpx\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.747348 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-config-data\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.755464 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-combined-ca-bundle\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.757614 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmpx\" (UniqueName: \"kubernetes.io/projected/b49610a6-b99e-432f-9d5f-271cec21d2e6-kube-api-access-bpmpx\") pod \"heat-db-sync-cqk8v\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") " pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.808146 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17a810c-7598-46ab-93c3-c480c175ca61" path="/var/lib/kubelet/pods/c17a810c-7598-46ab-93c3-c480c175ca61/volumes"
Feb 14 19:09:33 crc kubenswrapper[4897]: I0214 19:09:33.945339 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:09:34 crc kubenswrapper[4897]: I0214 19:09:34.398111 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-cqk8v"]
Feb 14 19:09:34 crc kubenswrapper[4897]: I0214 19:09:34.407966 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.150373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cqk8v" event={"ID":"b49610a6-b99e-432f-9d5f-271cec21d2e6","Type":"ContainerStarted","Data":"400a156fdf412ae57d342eff9729ee1000c05b8cb452006c77abd5bb5fa03602"}
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.297489 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.966351 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.967090 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-central-agent" containerID="cri-o://03441f53413ad382d2c05b5d1bcd0f3d216d2eaf5d8f1ec8d7bd898e6b8128ba" gracePeriod=30
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.967495 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-notification-agent" containerID="cri-o://7449fb55b01895d2c419a39f593ee8a0e3d4b5e000970c2c8c7d41bc0b2b0ba2" gracePeriod=30
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.967649 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="sg-core" containerID="cri-o://a346edc29687b0f84455d1f5d761a4d35c4e1cd3802cc1e0e9a511b46a7d1dd5" gracePeriod=30
Feb 14 19:09:35 crc kubenswrapper[4897]: I0214 19:09:35.967763 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="proxy-httpd" containerID="cri-o://872cb853dfea09c33d3d83294656dc36709c16823d525509a0aa90b95d6a1884" gracePeriod=30
Feb 14 19:09:36 crc kubenswrapper[4897]: I0214 19:09:36.166942 4897 generic.go:334] "Generic (PLEG): container finished" podID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerID="a346edc29687b0f84455d1f5d761a4d35c4e1cd3802cc1e0e9a511b46a7d1dd5" exitCode=2
Feb 14 19:09:36 crc kubenswrapper[4897]: I0214 19:09:36.166984 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerDied","Data":"a346edc29687b0f84455d1f5d761a4d35c4e1cd3802cc1e0e9a511b46a7d1dd5"}
Feb 14 19:09:36 crc kubenswrapper[4897]: I0214 19:09:36.313023 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.192807 4897 generic.go:334] "Generic (PLEG): container finished" podID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerID="872cb853dfea09c33d3d83294656dc36709c16823d525509a0aa90b95d6a1884" exitCode=0
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.193125 4897 generic.go:334] "Generic (PLEG): container finished" podID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerID="7449fb55b01895d2c419a39f593ee8a0e3d4b5e000970c2c8c7d41bc0b2b0ba2" exitCode=0
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.193160 4897 generic.go:334] "Generic (PLEG): container finished" podID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerID="03441f53413ad382d2c05b5d1bcd0f3d216d2eaf5d8f1ec8d7bd898e6b8128ba" exitCode=0
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.193213 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerDied","Data":"872cb853dfea09c33d3d83294656dc36709c16823d525509a0aa90b95d6a1884"}
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.193243 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerDied","Data":"7449fb55b01895d2c419a39f593ee8a0e3d4b5e000970c2c8c7d41bc0b2b0ba2"}
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.193255 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerDied","Data":"03441f53413ad382d2c05b5d1bcd0f3d216d2eaf5d8f1ec8d7bd898e6b8128ba"}
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.592883 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692260 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-config-data\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692312 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-scripts\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692412 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-log-httpd\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692451 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-sg-core-conf-yaml\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692536 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrxb4\" (UniqueName: \"kubernetes.io/projected/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-kube-api-access-nrxb4\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692555 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-run-httpd\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692594 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-ceilometer-tls-certs\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692648 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-combined-ca-bundle\") pod \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\" (UID: \"5d5972ac-5eb1-49b5-b70c-25d1777f89d3\") "
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692928 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.692998 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.693360 4897 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.693377 4897 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.720229 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-kube-api-access-nrxb4" (OuterVolumeSpecName: "kube-api-access-nrxb4") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "kube-api-access-nrxb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.758540 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-scripts" (OuterVolumeSpecName: "scripts") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.796057 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.800506 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.801002 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrxb4\" (UniqueName: \"kubernetes.io/projected/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-kube-api-access-nrxb4\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.801100 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.853677 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.897862 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-config-data" (OuterVolumeSpecName: "config-data") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.903802 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.903823 4897 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:37 crc kubenswrapper[4897]: I0214 19:09:37.910390 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d5972ac-5eb1-49b5-b70c-25d1777f89d3" (UID: "5d5972ac-5eb1-49b5-b70c-25d1777f89d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.008016 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d5972ac-5eb1-49b5-b70c-25d1777f89d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.214618 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d5972ac-5eb1-49b5-b70c-25d1777f89d3","Type":"ContainerDied","Data":"ca9c87b7986d73a4e74449545c5ebee1f5b14dbc3c932b60894c16ec6fe43877"}
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.214751 4897 scope.go:117] "RemoveContainer" containerID="872cb853dfea09c33d3d83294656dc36709c16823d525509a0aa90b95d6a1884"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.214711 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.253269 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.260820 4897 scope.go:117] "RemoveContainer" containerID="a346edc29687b0f84455d1f5d761a4d35c4e1cd3802cc1e0e9a511b46a7d1dd5"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.268068 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.291933 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:09:38 crc kubenswrapper[4897]: E0214 19:09:38.292435 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-central-agent"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292452 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-central-agent"
Feb 14 19:09:38 crc kubenswrapper[4897]: E0214 19:09:38.292477 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-notification-agent"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292483 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-notification-agent"
Feb 14 19:09:38 crc kubenswrapper[4897]: E0214 19:09:38.292495 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="proxy-httpd"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292501 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="proxy-httpd"
Feb 14 19:09:38 crc kubenswrapper[4897]: E0214 19:09:38.292530 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="sg-core"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292536 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="sg-core"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292748 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-notification-agent"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292770 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="proxy-httpd"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292790 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="ceilometer-central-agent"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.292805 4897 memory_manager.go:354] "RemoveStaleState removing state"
podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" containerName="sg-core" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.297630 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.300306 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.300561 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.300561 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.313652 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.314944 4897 scope.go:117] "RemoveContainer" containerID="7449fb55b01895d2c419a39f593ee8a0e3d4b5e000970c2c8c7d41bc0b2b0ba2" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.359674 4897 scope.go:117] "RemoveContainer" containerID="03441f53413ad382d2c05b5d1bcd0f3d216d2eaf5d8f1ec8d7bd898e6b8128ba" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432347 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0" Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432390 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/289311f5-ac62-4fe6-b260-8bda0a09331b-log-httpd\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0" 
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432501 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-scripts\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432523 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/289311f5-ac62-4fe6-b260-8bda0a09331b-run-httpd\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432570 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-config-data\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432597 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59tjz\" (UniqueName: \"kubernetes.io/projected/289311f5-ac62-4fe6-b260-8bda0a09331b-kube-api-access-59tjz\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.432640 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534649 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534697 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/289311f5-ac62-4fe6-b260-8bda0a09331b-log-httpd\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534739 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534787 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-scripts\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534801 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/289311f5-ac62-4fe6-b260-8bda0a09331b-run-httpd\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534837 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-config-data\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534855 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59tjz\" (UniqueName: \"kubernetes.io/projected/289311f5-ac62-4fe6-b260-8bda0a09331b-kube-api-access-59tjz\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.534889 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.535891 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/289311f5-ac62-4fe6-b260-8bda0a09331b-log-httpd\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.536462 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/289311f5-ac62-4fe6-b260-8bda0a09331b-run-httpd\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.540576 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-config-data\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.546794 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-scripts\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.546935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.547202 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.554595 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/289311f5-ac62-4fe6-b260-8bda0a09331b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.560553 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59tjz\" (UniqueName: \"kubernetes.io/projected/289311f5-ac62-4fe6-b260-8bda0a09331b-kube-api-access-59tjz\") pod \"ceilometer-0\" (UID: \"289311f5-ac62-4fe6-b260-8bda0a09331b\") " pod="openstack/ceilometer-0"
Feb 14 19:09:38 crc kubenswrapper[4897]: I0214 19:09:38.612724 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 19:09:39 crc kubenswrapper[4897]: I0214 19:09:39.249370 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 19:09:39 crc kubenswrapper[4897]: W0214 19:09:39.257506 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod289311f5_ac62_4fe6_b260_8bda0a09331b.slice/crio-16d38ec50710aac1a2836b9cac45dde467bf95c5ba39266c32b47d0ecd1c5e92 WatchSource:0}: Error finding container 16d38ec50710aac1a2836b9cac45dde467bf95c5ba39266c32b47d0ecd1c5e92: Status 404 returned error can't find the container with id 16d38ec50710aac1a2836b9cac45dde467bf95c5ba39266c32b47d0ecd1c5e92
Feb 14 19:09:39 crc kubenswrapper[4897]: I0214 19:09:39.810484 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5972ac-5eb1-49b5-b70c-25d1777f89d3" path="/var/lib/kubelet/pods/5d5972ac-5eb1-49b5-b70c-25d1777f89d3/volumes"
Feb 14 19:09:40 crc kubenswrapper[4897]: I0214 19:09:40.246525 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerStarted","Data":"16d38ec50710aac1a2836b9cac45dde467bf95c5ba39266c32b47d0ecd1c5e92"}
Feb 14 19:09:40 crc kubenswrapper[4897]: I0214 19:09:40.384168 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="rabbitmq" containerID="cri-o://09d64742c29c0487e12d87473de7e26082faebf923d1f5ccc5a3856364def3a5" gracePeriod=604795
Feb 14 19:09:40 crc kubenswrapper[4897]: I0214 19:09:40.521849 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="rabbitmq" containerID="cri-o://f55537400280848c8107974904a1cdcd30ba7c25d7ae2f56bedeab430743c3f3" gracePeriod=604796
Feb 14 19:09:43 crc kubenswrapper[4897]: I0214 19:09:43.841130 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Feb 14 19:09:44 crc kubenswrapper[4897]: I0214 19:09:44.168149 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Feb 14 19:09:47 crc kubenswrapper[4897]: I0214 19:09:47.344906 4897 generic.go:334] "Generic (PLEG): container finished" podID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerID="f55537400280848c8107974904a1cdcd30ba7c25d7ae2f56bedeab430743c3f3" exitCode=0
Feb 14 19:09:47 crc kubenswrapper[4897]: I0214 19:09:47.345007 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"75b00edc-276b-4e3b-84c1-db17e1eeb3ee","Type":"ContainerDied","Data":"f55537400280848c8107974904a1cdcd30ba7c25d7ae2f56bedeab430743c3f3"}
Feb 14 19:09:47 crc kubenswrapper[4897]: I0214 19:09:47.349516 4897 generic.go:334] "Generic (PLEG): container finished" podID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerID="09d64742c29c0487e12d87473de7e26082faebf923d1f5ccc5a3856364def3a5" exitCode=0
Feb 14 19:09:47 crc kubenswrapper[4897]: I0214 19:09:47.349565 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3e532d34-b3bb-4f63-bc64-6b6cc22666b0","Type":"ContainerDied","Data":"09d64742c29c0487e12d87473de7e26082faebf923d1f5ccc5a3856364def3a5"}
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.329299 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-qjgh4"]
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.331762 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.334361 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.350791 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-qjgh4"]
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.417843 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.418229 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpjl\" (UniqueName: \"kubernetes.io/projected/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-kube-api-access-srpjl\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.418282 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.418351 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-config\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.418475 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.418515 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.418625 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.522979 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpjl\" (UniqueName: \"kubernetes.io/projected/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-kube-api-access-srpjl\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.523045 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.523094 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-config\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.523160 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.523191 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.523259 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.523331 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.524186 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.524385 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-config\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.524984 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.525016 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.525178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.526363 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.582659 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpjl\" (UniqueName: \"kubernetes.io/projected/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-kube-api-access-srpjl\") pod \"dnsmasq-dns-7d84b4d45c-qjgh4\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:49 crc kubenswrapper[4897]: I0214 19:09:49.649574 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:09:52 crc kubenswrapper[4897]: E0214 19:09:52.448098 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 14 19:09:52 crc kubenswrapper[4897]: E0214 19:09:52.448629 4897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 14 19:09:52 crc kubenswrapper[4897]: E0214 19:09:52.448771 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpmpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-cqk8v_openstack(b49610a6-b99e-432f-9d5f-271cec21d2e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 14 19:09:52 crc kubenswrapper[4897]: E0214 19:09:52.450228 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-cqk8v" podUID="b49610a6-b99e-432f-9d5f-271cec21d2e6"
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.548485 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.598677 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-erlang-cookie\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.598782 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-config-data\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.598867 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-pod-info\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.598906 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-plugins\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.598959 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-erlang-cookie-secret\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.599047 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-confd\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.599098 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-server-conf\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.599129 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvd4\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-kube-api-access-xwvd4\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.600670 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.600863 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-tls\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.600944 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-plugins-conf\") pod \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\" (UID: \"3e532d34-b3bb-4f63-bc64-6b6cc22666b0\") "
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.601007 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.601774 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.601857 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "plugins-conf".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.603981 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.604012 4897 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.604039 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.633409 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.633918 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.634916 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-pod-info" (OuterVolumeSpecName: "pod-info") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.637105 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-kube-api-access-xwvd4" (OuterVolumeSpecName: "kube-api-access-xwvd4") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "kube-api-access-xwvd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.649691 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47" (OuterVolumeSpecName: "persistence") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "pvc-c3f582fe-134d-414a-971c-d17234485d47". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.681374 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-config-data" (OuterVolumeSpecName: "config-data") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.694793 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-server-conf" (OuterVolumeSpecName: "server-conf") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708186 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708224 4897 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708235 4897 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708246 4897 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708256 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvd4\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-kube-api-access-xwvd4\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708281 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") on node \"crc\" " Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.708292 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.761014 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.761385 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c3f582fe-134d-414a-971c-d17234485d47" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47") on node "crc" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.807024 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3e532d34-b3bb-4f63-bc64-6b6cc22666b0" (UID: "3e532d34-b3bb-4f63-bc64-6b6cc22666b0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.810674 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e532d34-b3bb-4f63-bc64-6b6cc22666b0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:52 crc kubenswrapper[4897]: I0214 19:09:52.810703 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.425496 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3e532d34-b3bb-4f63-bc64-6b6cc22666b0","Type":"ContainerDied","Data":"35d51b28cecbf76e932afedcc230bbc3f85fdff73e6fb5a862f4742fc75228d7"} Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.425538 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.425566 4897 scope.go:117] "RemoveContainer" containerID="09d64742c29c0487e12d87473de7e26082faebf923d1f5ccc5a3856364def3a5" Feb 14 19:09:53 crc kubenswrapper[4897]: E0214 19:09:53.432693 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-cqk8v" podUID="b49610a6-b99e-432f-9d5f-271cec21d2e6" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.501295 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.519922 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.543928 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 19:09:53 crc kubenswrapper[4897]: E0214 19:09:53.544497 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="setup-container" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.544513 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="setup-container" Feb 14 19:09:53 crc kubenswrapper[4897]: E0214 19:09:53.544559 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="rabbitmq" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.544565 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="rabbitmq" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.544789 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" containerName="rabbitmq" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.546213 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.557905 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.627949 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628305 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljnz\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-kube-api-access-5ljnz\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628354 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628382 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-config-data\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 
19:09:53.628416 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/540a20b2-a6ae-4527-bb75-b6d570169dc2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628456 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628496 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628513 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628532 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/540a20b2-a6ae-4527-bb75-b6d570169dc2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628600 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.628654 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.733886 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.733933 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljnz\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-kube-api-access-5ljnz\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.733990 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734017 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-config-data\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734083 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/540a20b2-a6ae-4527-bb75-b6d570169dc2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734129 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734184 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734207 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734233 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/540a20b2-a6ae-4527-bb75-b6d570169dc2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: 
\"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734338 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734418 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734803 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.734892 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.736116 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-config-data\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.737140 4897 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.737188 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db3bb9145c21dd13780a516e6cf8590bb629ffd0f8f03124b19a4bac524d871f/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.737490 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.737524 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/540a20b2-a6ae-4527-bb75-b6d570169dc2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.741414 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.742431 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/540a20b2-a6ae-4527-bb75-b6d570169dc2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.742702 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.750784 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/540a20b2-a6ae-4527-bb75-b6d570169dc2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.755708 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljnz\" (UniqueName: \"kubernetes.io/projected/540a20b2-a6ae-4527-bb75-b6d570169dc2-kube-api-access-5ljnz\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.807727 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e532d34-b3bb-4f63-bc64-6b6cc22666b0" path="/var/lib/kubelet/pods/3e532d34-b3bb-4f63-bc64-6b6cc22666b0/volumes" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 19:09:53.830662 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c3f582fe-134d-414a-971c-d17234485d47\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c3f582fe-134d-414a-971c-d17234485d47\") pod \"rabbitmq-server-2\" (UID: \"540a20b2-a6ae-4527-bb75-b6d570169dc2\") " pod="openstack/rabbitmq-server-2" Feb 14 19:09:53 crc kubenswrapper[4897]: I0214 
19:09:53.874689 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 19:09:56 crc kubenswrapper[4897]: I0214 19:09:56.653662 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-qjgh4"] Feb 14 19:09:56 crc kubenswrapper[4897]: I0214 19:09:56.805198 4897 scope.go:117] "RemoveContainer" containerID="bb5453fc7c803ba4c78169d1d9f1ca44c2597e317e1cdc22384f1796b179a86c" Feb 14 19:09:56 crc kubenswrapper[4897]: W0214 19:09:56.816331 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0d02d9c_0ec9_4a03_9c21_79f326ef56fd.slice/crio-65a27d5625ade7c61f7ae7c342ae5b96b1279f6265a086efff4fde2a81cf2bb4 WatchSource:0}: Error finding container 65a27d5625ade7c61f7ae7c342ae5b96b1279f6265a086efff4fde2a81cf2bb4: Status 404 returned error can't find the container with id 65a27d5625ade7c61f7ae7c342ae5b96b1279f6265a086efff4fde2a81cf2bb4 Feb 14 19:09:56 crc kubenswrapper[4897]: I0214 19:09:56.997396 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.041008 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-server-conf\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.041183 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbsjw\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-kube-api-access-pbsjw\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.041250 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-plugins\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.041305 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-config-data\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.041327 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-tls\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.041372 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-confd\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.042187 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.042241 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-plugins-conf\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.044096 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.046196 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.048801 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.058019 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-kube-api-access-pbsjw" (OuterVolumeSpecName: "kube-api-access-pbsjw") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "kube-api-access-pbsjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.102334 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-config-data" (OuterVolumeSpecName: "config-data") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.138585 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282" (OuterVolumeSpecName: "persistence") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.144997 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-pod-info\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145065 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-erlang-cookie-secret\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145127 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-erlang-cookie\") pod \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\" (UID: \"75b00edc-276b-4e3b-84c1-db17e1eeb3ee\") " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145660 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") on node \"crc\" " Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145678 4897 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145689 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbsjw\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-kube-api-access-pbsjw\") on node \"crc\" DevicePath \"\"" Feb 14 
19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145700 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145710 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.145718 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.146358 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.153953 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.158381 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-pod-info" (OuterVolumeSpecName: "pod-info") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.211784 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-server-conf" (OuterVolumeSpecName: "server-conf") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.226533 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.227111 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282") on node "crc" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.265093 4897 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.265390 4897 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.265400 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.265411 4897 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.265421 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.447868 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.449288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "75b00edc-276b-4e3b-84c1-db17e1eeb3ee" (UID: "75b00edc-276b-4e3b-84c1-db17e1eeb3ee"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.480992 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b00edc-276b-4e3b-84c1-db17e1eeb3ee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.496285 4897 generic.go:334] "Generic (PLEG): container finished" podID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerID="0d3f38e80cfa2cd61fe0469a3460f65b599c1121f1882599883ebe942751d19d" exitCode=0 Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.496363 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" event={"ID":"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd","Type":"ContainerDied","Data":"0d3f38e80cfa2cd61fe0469a3460f65b599c1121f1882599883ebe942751d19d"} Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.496392 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" event={"ID":"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd","Type":"ContainerStarted","Data":"65a27d5625ade7c61f7ae7c342ae5b96b1279f6265a086efff4fde2a81cf2bb4"} Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.514185 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"75b00edc-276b-4e3b-84c1-db17e1eeb3ee","Type":"ContainerDied","Data":"15b99b673350073944d84bf07a5b46fdd24e4605cae1e1700b21a73edc4a2dd3"} Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.514472 4897 scope.go:117] "RemoveContainer" containerID="f55537400280848c8107974904a1cdcd30ba7c25d7ae2f56bedeab430743c3f3" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 
19:09:57.514377 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.529336 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerStarted","Data":"4f46f557febbe5e70794375605b160aeb9b01adc4005b88dc9ae36489b9cb612"} Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.616507 4897 scope.go:117] "RemoveContainer" containerID="cbdac35dc72f27a3253bb19267a193ec38202343ba5dde4d824ec972949ec729" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.648100 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.663073 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.682245 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 19:09:57 crc kubenswrapper[4897]: E0214 19:09:57.682754 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="rabbitmq" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.682799 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="rabbitmq" Feb 14 19:09:57 crc kubenswrapper[4897]: E0214 19:09:57.682846 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="setup-container" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.682853 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="setup-container" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.683093 4897 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="rabbitmq" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.685829 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.689762 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.690012 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.690082 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4sqls" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.690204 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.690449 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.690671 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.691363 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.728108 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.788464 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/292d0d53-8176-4764-84c5-a899eb11ab99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.788655 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.788740 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.788812 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/292d0d53-8176-4764-84c5-a899eb11ab99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.788960 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.789091 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.789165 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.789236 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.789360 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wts6q\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-kube-api-access-wts6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.789665 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.789809 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.816588 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" path="/var/lib/kubelet/pods/75b00edc-276b-4e3b-84c1-db17e1eeb3ee/volumes" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892312 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892394 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892455 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/292d0d53-8176-4764-84c5-a899eb11ab99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892528 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892558 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892578 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/292d0d53-8176-4764-84c5-a899eb11ab99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892622 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892675 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892711 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892730 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.892752 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wts6q\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-kube-api-access-wts6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.894422 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.895318 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.895569 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.895691 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.896134 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/292d0d53-8176-4764-84c5-a899eb11ab99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.898555 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/292d0d53-8176-4764-84c5-a899eb11ab99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.898975 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/292d0d53-8176-4764-84c5-a899eb11ab99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.900496 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.901874 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.904342 
4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.904461 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ba554ac9cd7bf9719c3c599063f28fa348ae684b1c7ff81601658ac87c0ecab/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.915774 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wts6q\" (UniqueName: \"kubernetes.io/projected/292d0d53-8176-4764-84c5-a899eb11ab99-kube-api-access-wts6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:57 crc kubenswrapper[4897]: I0214 19:09:57.959535 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1ad9f53-f4e5-40e5-9da8-9de52cf43282\") pod \"rabbitmq-cell1-server-0\" (UID: \"292d0d53-8176-4764-84c5-a899eb11ab99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.006352 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.511758 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 19:09:58 crc kubenswrapper[4897]: W0214 19:09:58.535197 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod292d0d53_8176_4764_84c5_a899eb11ab99.slice/crio-8fbcd14bfb8d47ea8286a102ae23c598d3cef35a2c72ef836b61859a5b9a1ded WatchSource:0}: Error finding container 8fbcd14bfb8d47ea8286a102ae23c598d3cef35a2c72ef836b61859a5b9a1ded: Status 404 returned error can't find the container with id 8fbcd14bfb8d47ea8286a102ae23c598d3cef35a2c72ef836b61859a5b9a1ded Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.546744 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerStarted","Data":"da620bfec1dfc9e3a1807acfd78032122486f68b0c6cb3f4a4e6e57dca4feaf8"} Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.551750 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" event={"ID":"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd","Type":"ContainerStarted","Data":"d6d96f99e3e501bf0c3cc95a99e620a870410dc6579ac6277fd950f1be1da235"} Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.551943 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.556117 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"540a20b2-a6ae-4527-bb75-b6d570169dc2","Type":"ContainerStarted","Data":"1c262db20f83f4d336c629baac8e1f2879645aa14dde316e08ccff60ce3aaa9e"} Feb 14 19:09:58 crc kubenswrapper[4897]: I0214 19:09:58.574930 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" podStartSLOduration=9.57490871 podStartE2EDuration="9.57490871s" podCreationTimestamp="2026-02-14 19:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:09:58.574775336 +0000 UTC m=+1651.551183829" watchObservedRunningTime="2026-02-14 19:09:58.57490871 +0000 UTC m=+1651.551317193" Feb 14 19:09:59 crc kubenswrapper[4897]: I0214 19:09:59.168573 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="75b00edc-276b-4e3b-84c1-db17e1eeb3ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Feb 14 19:09:59 crc kubenswrapper[4897]: I0214 19:09:59.575813 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"292d0d53-8176-4764-84c5-a899eb11ab99","Type":"ContainerStarted","Data":"8fbcd14bfb8d47ea8286a102ae23c598d3cef35a2c72ef836b61859a5b9a1ded"} Feb 14 19:09:59 crc kubenswrapper[4897]: I0214 19:09:59.579734 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerStarted","Data":"095939e2d9425274159625270f7ebe0107a431c2bc50d4a67c117a6aa99e8af7"} Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.603408 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"292d0d53-8176-4764-84c5-a899eb11ab99","Type":"ContainerStarted","Data":"4e394d973137312fa94fefaf0d8c877935f1768784bce5d484494175d18ae595"} Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.606530 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"540a20b2-a6ae-4527-bb75-b6d570169dc2","Type":"ContainerStarted","Data":"fc2738ccdc11b3412bc28340529e06e71691d823b557c8a44f5be09fee0dc82d"} Feb 14 19:10:01 crc kubenswrapper[4897]: 
I0214 19:10:01.610460 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerStarted","Data":"b423bdaaaf3d1833f686f128bb69389c78a05619f6a7a64860768cbd125389de"}
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.610694 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.721701 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6613489550000002 podStartE2EDuration="23.721671997s" podCreationTimestamp="2026-02-14 19:09:38 +0000 UTC" firstStartedPulling="2026-02-14 19:09:39.260568002 +0000 UTC m=+1632.236976485" lastFinishedPulling="2026-02-14 19:10:00.320891054 +0000 UTC m=+1653.297299527" observedRunningTime="2026-02-14 19:10:01.715425752 +0000 UTC m=+1654.691834295" watchObservedRunningTime="2026-02-14 19:10:01.721671997 +0000 UTC m=+1654.698080480"
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.725823 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.725941 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.726099 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.727824 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 19:10:01 crc kubenswrapper[4897]: I0214 19:10:01.727946 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" gracePeriod=600
Feb 14 19:10:01 crc kubenswrapper[4897]: E0214 19:10:01.847076 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:10:02 crc kubenswrapper[4897]: I0214 19:10:02.625616 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" exitCode=0
Feb 14 19:10:02 crc kubenswrapper[4897]: I0214 19:10:02.626993 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"}
Feb 14 19:10:02 crc kubenswrapper[4897]: I0214 19:10:02.627055 4897 scope.go:117] "RemoveContainer" containerID="235f7e04d5c8603ba95b93f15134ed139784ade9cf49c6bd1886aa661c14e66a"
Feb 14 19:10:02 crc kubenswrapper[4897]: I0214 19:10:02.628657 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:10:02 crc kubenswrapper[4897]: E0214 19:10:02.628925 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:10:04 crc kubenswrapper[4897]: I0214 19:10:04.652293 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:10:04 crc kubenswrapper[4897]: I0214 19:10:04.744795 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"]
Feb 14 19:10:04 crc kubenswrapper[4897]: I0214 19:10:04.745072 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" podUID="f87788c4-1596-41e8-9033-674336188dc7" containerName="dnsmasq-dns" containerID="cri-o://c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff" gracePeriod=10
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.050186 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-ghds2"]
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.052997 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.099154 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-ghds2"]
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.201845 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.201977 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.202161 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xp7\" (UniqueName: \"kubernetes.io/projected/31859f8b-6460-470d-b9e5-56b33ef4a88d-kube-api-access-67xp7\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.202205 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.202246 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.202271 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.202339 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-config\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.304774 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xp7\" (UniqueName: \"kubernetes.io/projected/31859f8b-6460-470d-b9e5-56b33ef4a88d-kube-api-access-67xp7\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.305140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.305201 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.305238 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.305298 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-config\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.305388 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.305503 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.306394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.306632 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.307259 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.307497 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-config\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.307742 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.308045 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31859f8b-6460-470d-b9e5-56b33ef4a88d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.336730 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xp7\" (UniqueName: \"kubernetes.io/projected/31859f8b-6460-470d-b9e5-56b33ef4a88d-kube-api-access-67xp7\") pod \"dnsmasq-dns-6f6df4f56c-ghds2\" (UID: \"31859f8b-6460-470d-b9e5-56b33ef4a88d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.378631 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.519682 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.617108 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgcd2\" (UniqueName: \"kubernetes.io/projected/f87788c4-1596-41e8-9033-674336188dc7-kube-api-access-fgcd2\") pod \"f87788c4-1596-41e8-9033-674336188dc7\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") "
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.617206 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-swift-storage-0\") pod \"f87788c4-1596-41e8-9033-674336188dc7\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") "
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.617333 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-nb\") pod \"f87788c4-1596-41e8-9033-674336188dc7\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") "
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.617394 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-sb\") pod \"f87788c4-1596-41e8-9033-674336188dc7\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") "
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.617558 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-config\") pod \"f87788c4-1596-41e8-9033-674336188dc7\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") "
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.617611 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-svc\") pod \"f87788c4-1596-41e8-9033-674336188dc7\" (UID: \"f87788c4-1596-41e8-9033-674336188dc7\") "
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.647283 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87788c4-1596-41e8-9033-674336188dc7-kube-api-access-fgcd2" (OuterVolumeSpecName: "kube-api-access-fgcd2") pod "f87788c4-1596-41e8-9033-674336188dc7" (UID: "f87788c4-1596-41e8-9033-674336188dc7"). InnerVolumeSpecName "kube-api-access-fgcd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.674961 4897 generic.go:334] "Generic (PLEG): container finished" podID="f87788c4-1596-41e8-9033-674336188dc7" containerID="c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff" exitCode=0
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.675899 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.675917 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" event={"ID":"f87788c4-1596-41e8-9033-674336188dc7","Type":"ContainerDied","Data":"c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff"}
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.676933 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-c8m59" event={"ID":"f87788c4-1596-41e8-9033-674336188dc7","Type":"ContainerDied","Data":"5ab40f2fb917b93dd5271a05fd8e568390e981dca20c0b7bd1576d8f818e5033"}
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.676962 4897 scope.go:117] "RemoveContainer" containerID="c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.704448 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f87788c4-1596-41e8-9033-674336188dc7" (UID: "f87788c4-1596-41e8-9033-674336188dc7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.709886 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f87788c4-1596-41e8-9033-674336188dc7" (UID: "f87788c4-1596-41e8-9033-674336188dc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.739453 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-config" (OuterVolumeSpecName: "config") pod "f87788c4-1596-41e8-9033-674336188dc7" (UID: "f87788c4-1596-41e8-9033-674336188dc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.743191 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.743215 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-config\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.743225 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.743235 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgcd2\" (UniqueName: \"kubernetes.io/projected/f87788c4-1596-41e8-9033-674336188dc7-kube-api-access-fgcd2\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.759657 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f87788c4-1596-41e8-9033-674336188dc7" (UID: "f87788c4-1596-41e8-9033-674336188dc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.798229 4897 scope.go:117] "RemoveContainer" containerID="fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.802897 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f87788c4-1596-41e8-9033-674336188dc7" (UID: "f87788c4-1596-41e8-9033-674336188dc7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.830719 4897 scope.go:117] "RemoveContainer" containerID="c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff"
Feb 14 19:10:05 crc kubenswrapper[4897]: E0214 19:10:05.832145 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff\": container with ID starting with c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff not found: ID does not exist" containerID="c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.832186 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff"} err="failed to get container status \"c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff\": rpc error: code = NotFound desc = could not find container \"c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff\": container with ID starting with c1baac32e31dbfa4cc88a8b99be9bc177ee8cd8ff6fae082ad5b22b97778dfff not found: ID does not exist"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.832211 4897 scope.go:117] "RemoveContainer" containerID="fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2"
Feb 14 19:10:05 crc kubenswrapper[4897]: E0214 19:10:05.832806 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2\": container with ID starting with fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2 not found: ID does not exist" containerID="fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.832833 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2"} err="failed to get container status \"fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2\": rpc error: code = NotFound desc = could not find container \"fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2\": container with ID starting with fc09720803425945ff866cbee96a2d76fc93b5b1ab3c7a9eb8868851d53ac4e2 not found: ID does not exist"
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.848915 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:05 crc kubenswrapper[4897]: I0214 19:10:05.848954 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87788c4-1596-41e8-9033-674336188dc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:06 crc kubenswrapper[4897]: I0214 19:10:06.033140 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"]
Feb 14 19:10:06 crc kubenswrapper[4897]: I0214 19:10:06.044920 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-c8m59"]
Feb 14 19:10:06 crc kubenswrapper[4897]: I0214 19:10:06.073093 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-ghds2"]
Feb 14 19:10:06 crc kubenswrapper[4897]: I0214 19:10:06.689843 4897 generic.go:334] "Generic (PLEG): container finished" podID="31859f8b-6460-470d-b9e5-56b33ef4a88d" containerID="e67c74c85f785355b85dd24db27bdd49c1acc2b84c177c1960f6202e91a6dc00" exitCode=0
Feb 14 19:10:06 crc kubenswrapper[4897]: I0214 19:10:06.689906 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2" event={"ID":"31859f8b-6460-470d-b9e5-56b33ef4a88d","Type":"ContainerDied","Data":"e67c74c85f785355b85dd24db27bdd49c1acc2b84c177c1960f6202e91a6dc00"}
Feb 14 19:10:06 crc kubenswrapper[4897]: I0214 19:10:06.690423 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2" event={"ID":"31859f8b-6460-470d-b9e5-56b33ef4a88d","Type":"ContainerStarted","Data":"6038c3dd0114ea7773a6a1cb3d9697b980a484622c4606da68db74e1caa1ae1e"}
Feb 14 19:10:07 crc kubenswrapper[4897]: I0214 19:10:07.706364 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2" event={"ID":"31859f8b-6460-470d-b9e5-56b33ef4a88d","Type":"ContainerStarted","Data":"d815ccfab4e84b60b546cd54ea787b6219b5d6fa648e8d94af86bb9f2159ce25"}
Feb 14 19:10:07 crc kubenswrapper[4897]: I0214 19:10:07.707961 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:07 crc kubenswrapper[4897]: I0214 19:10:07.748216 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2" podStartSLOduration=2.748188119 podStartE2EDuration="2.748188119s" podCreationTimestamp="2026-02-14 19:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:10:07.732902873 +0000 UTC m=+1660.709311366" watchObservedRunningTime="2026-02-14 19:10:07.748188119 +0000 UTC m=+1660.724596642"
Feb 14 19:10:07 crc kubenswrapper[4897]: I0214 19:10:07.810342 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87788c4-1596-41e8-9033-674336188dc7" path="/var/lib/kubelet/pods/f87788c4-1596-41e8-9033-674336188dc7/volumes"
Feb 14 19:10:08 crc kubenswrapper[4897]: I0214 19:10:08.719553 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cqk8v" event={"ID":"b49610a6-b99e-432f-9d5f-271cec21d2e6","Type":"ContainerStarted","Data":"c1c2073afa58c1a74aad53a2ba7a7ddfc453057ab3e9c8cd8870c6a483dbab2d"}
Feb 14 19:10:08 crc kubenswrapper[4897]: I0214 19:10:08.741495 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-cqk8v" podStartSLOduration=2.143112672 podStartE2EDuration="35.741473925s" podCreationTimestamp="2026-02-14 19:09:33 +0000 UTC" firstStartedPulling="2026-02-14 19:09:34.407766406 +0000 UTC m=+1627.384174889" lastFinishedPulling="2026-02-14 19:10:08.006127659 +0000 UTC m=+1660.982536142" observedRunningTime="2026-02-14 19:10:08.73712737 +0000 UTC m=+1661.713535863" watchObservedRunningTime="2026-02-14 19:10:08.741473925 +0000 UTC m=+1661.717882408"
Feb 14 19:10:12 crc kubenswrapper[4897]: I0214 19:10:12.774165 4897 generic.go:334] "Generic (PLEG): container finished" podID="b49610a6-b99e-432f-9d5f-271cec21d2e6" containerID="c1c2073afa58c1a74aad53a2ba7a7ddfc453057ab3e9c8cd8870c6a483dbab2d" exitCode=0
Feb 14 19:10:12 crc kubenswrapper[4897]: I0214 19:10:12.774244 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cqk8v" event={"ID":"b49610a6-b99e-432f-9d5f-271cec21d2e6","Type":"ContainerDied","Data":"c1c2073afa58c1a74aad53a2ba7a7ddfc453057ab3e9c8cd8870c6a483dbab2d"}
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.375997 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.464561 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-combined-ca-bundle\") pod \"b49610a6-b99e-432f-9d5f-271cec21d2e6\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") "
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.464860 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-config-data\") pod \"b49610a6-b99e-432f-9d5f-271cec21d2e6\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") "
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.464903 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmpx\" (UniqueName: \"kubernetes.io/projected/b49610a6-b99e-432f-9d5f-271cec21d2e6-kube-api-access-bpmpx\") pod \"b49610a6-b99e-432f-9d5f-271cec21d2e6\" (UID: \"b49610a6-b99e-432f-9d5f-271cec21d2e6\") "
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.472539 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49610a6-b99e-432f-9d5f-271cec21d2e6-kube-api-access-bpmpx" (OuterVolumeSpecName: "kube-api-access-bpmpx") pod "b49610a6-b99e-432f-9d5f-271cec21d2e6" (UID: "b49610a6-b99e-432f-9d5f-271cec21d2e6"). InnerVolumeSpecName "kube-api-access-bpmpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.530335 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b49610a6-b99e-432f-9d5f-271cec21d2e6" (UID: "b49610a6-b99e-432f-9d5f-271cec21d2e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.565280 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-config-data" (OuterVolumeSpecName: "config-data") pod "b49610a6-b99e-432f-9d5f-271cec21d2e6" (UID: "b49610a6-b99e-432f-9d5f-271cec21d2e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.568560 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.568606 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49610a6-b99e-432f-9d5f-271cec21d2e6-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.568625 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmpx\" (UniqueName: \"kubernetes.io/projected/b49610a6-b99e-432f-9d5f-271cec21d2e6-kube-api-access-bpmpx\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.805900 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-cqk8v" event={"ID":"b49610a6-b99e-432f-9d5f-271cec21d2e6","Type":"ContainerDied","Data":"400a156fdf412ae57d342eff9729ee1000c05b8cb452006c77abd5bb5fa03602"}
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.805934 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400a156fdf412ae57d342eff9729ee1000c05b8cb452006c77abd5bb5fa03602"
Feb 14 19:10:14 crc kubenswrapper[4897]: I0214 19:10:14.805961 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-cqk8v"
Feb 14 19:10:15 crc kubenswrapper[4897]: I0214 19:10:15.380191 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-ghds2"
Feb 14 19:10:15 crc kubenswrapper[4897]: I0214 19:10:15.473501 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-qjgh4"]
Feb 14 19:10:15 crc kubenswrapper[4897]: I0214 19:10:15.473774 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerName="dnsmasq-dns" containerID="cri-o://d6d96f99e3e501bf0c3cc95a99e620a870410dc6579ac6277fd950f1be1da235" gracePeriod=10
Feb 14 19:10:15 crc kubenswrapper[4897]: I0214 19:10:15.876244 4897 generic.go:334] "Generic (PLEG): container finished" podID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerID="d6d96f99e3e501bf0c3cc95a99e620a870410dc6579ac6277fd950f1be1da235" exitCode=0
Feb 14 19:10:15 crc kubenswrapper[4897]: I0214 19:10:15.876552 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" event={"ID":"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd","Type":"ContainerDied","Data":"d6d96f99e3e501bf0c3cc95a99e620a870410dc6579ac6277fd950f1be1da235"}
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.156463 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.296098 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5858fcf85c-g8zcx"]
Feb 14 19:10:16 crc kubenswrapper[4897]: E0214 19:10:16.296897 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerName="dnsmasq-dns"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.296914 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerName="dnsmasq-dns"
Feb 14 19:10:16 crc kubenswrapper[4897]: E0214 19:10:16.296946 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87788c4-1596-41e8-9033-674336188dc7" containerName="dnsmasq-dns"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.296953 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87788c4-1596-41e8-9033-674336188dc7" containerName="dnsmasq-dns"
Feb 14 19:10:16 crc kubenswrapper[4897]: E0214 19:10:16.296969 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49610a6-b99e-432f-9d5f-271cec21d2e6" containerName="heat-db-sync"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.296977 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49610a6-b99e-432f-9d5f-271cec21d2e6" containerName="heat-db-sync"
Feb 14 19:10:16 crc kubenswrapper[4897]: E0214 19:10:16.296989 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87788c4-1596-41e8-9033-674336188dc7" containerName="init"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.296995 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87788c4-1596-41e8-9033-674336188dc7" containerName="init"
Feb 14 19:10:16 crc kubenswrapper[4897]: E0214 19:10:16.297006 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerName="init"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.297013 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerName="init"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.297238 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49610a6-b99e-432f-9d5f-271cec21d2e6" containerName="heat-db-sync"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.297248 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87788c4-1596-41e8-9033-674336188dc7" containerName="dnsmasq-dns"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.297271 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" containerName="dnsmasq-dns"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.298102 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5858fcf85c-g8zcx"
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.324078 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5858fcf85c-g8zcx"]
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.333974 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpjl\" (UniqueName: \"kubernetes.io/projected/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-kube-api-access-srpjl\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") "
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.334059 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-swift-storage-0\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") "
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.334099 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-nb\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.334171 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-svc\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.334210 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-sb\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.334414 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-openstack-edpm-ipam\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.334463 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-config\") pod \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\" (UID: \"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd\") " Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.359276 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-kube-api-access-srpjl" (OuterVolumeSpecName: "kube-api-access-srpjl") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). 
InnerVolumeSpecName "kube-api-access-srpjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.398333 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6dff47865f-dwdfs"] Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.407876 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.441935 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-config-data-custom\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.442239 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr472\" (UniqueName: \"kubernetes.io/projected/ffd0f657-d81f-4767-b645-685963cf78ca-kube-api-access-dr472\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.442640 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-config-data\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.442680 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-combined-ca-bundle\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: 
\"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.442909 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpjl\" (UniqueName: \"kubernetes.io/projected/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-kube-api-access-srpjl\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.460293 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6dff47865f-dwdfs"] Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.460840 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.495728 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-config" (OuterVolumeSpecName: "config") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.497591 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.500183 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-59bb6b8559-n8bq2"] Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.502478 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.503570 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.513537 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.516493 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" (UID: "f0d02d9c-0ec9-4a03-9c21-79f326ef56fd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.518172 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59bb6b8559-n8bq2"] Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.545694 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr472\" (UniqueName: \"kubernetes.io/projected/ffd0f657-d81f-4767-b645-685963cf78ca-kube-api-access-dr472\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.545769 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-public-tls-certs\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.545844 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-config-data-custom\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.545873 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-internal-tls-certs\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.545931 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-combined-ca-bundle\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.545975 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbppj\" (UniqueName: \"kubernetes.io/projected/8d94bdc7-c732-4513-878f-0d7f8ae186ca-kube-api-access-lbppj\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546004 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-config-data\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546023 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-combined-ca-bundle\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546106 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-config-data-custom\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546144 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-config-data\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546216 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546229 4897 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-config\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546240 4897 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546248 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546257 4897 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.546265 4897 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.550698 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-config-data-custom\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.551095 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-combined-ca-bundle\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.551828 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd0f657-d81f-4767-b645-685963cf78ca-config-data\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.562120 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr472\" (UniqueName: \"kubernetes.io/projected/ffd0f657-d81f-4767-b645-685963cf78ca-kube-api-access-dr472\") pod \"heat-engine-5858fcf85c-g8zcx\" (UID: \"ffd0f657-d81f-4767-b645-685963cf78ca\") " pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.629829 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.648622 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-public-tls-certs\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.649192 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-config-data-custom\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.649309 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-internal-tls-certs\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.649441 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-combined-ca-bundle\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.649547 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-public-tls-certs\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " 
pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.649702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbppj\" (UniqueName: \"kubernetes.io/projected/8d94bdc7-c732-4513-878f-0d7f8ae186ca-kube-api-access-lbppj\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.649835 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-config-data-custom\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.650158 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-config-data\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.650267 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-internal-tls-certs\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.650359 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-combined-ca-bundle\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: 
\"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.650440 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-config-data\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.650524 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vk8\" (UniqueName: \"kubernetes.io/projected/aa628683-cd13-40e1-a275-1bf56d130479-kube-api-access-22vk8\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.654593 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-public-tls-certs\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.654961 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-internal-tls-certs\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.655835 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-combined-ca-bundle\") pod \"heat-api-6dff47865f-dwdfs\" (UID: 
\"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.657703 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-config-data-custom\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.663988 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94bdc7-c732-4513-878f-0d7f8ae186ca-config-data\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.678628 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbppj\" (UniqueName: \"kubernetes.io/projected/8d94bdc7-c732-4513-878f-0d7f8ae186ca-kube-api-access-lbppj\") pod \"heat-api-6dff47865f-dwdfs\" (UID: \"8d94bdc7-c732-4513-878f-0d7f8ae186ca\") " pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.752567 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-internal-tls-certs\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.752625 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-combined-ca-bundle\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " 
pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.752702 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-config-data\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.752741 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22vk8\" (UniqueName: \"kubernetes.io/projected/aa628683-cd13-40e1-a275-1bf56d130479-kube-api-access-22vk8\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.752892 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-public-tls-certs\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.752945 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-config-data-custom\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.758189 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-internal-tls-certs\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" 
Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.770533 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-config-data\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.771388 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-combined-ca-bundle\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.772016 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-config-data-custom\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.777556 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vk8\" (UniqueName: \"kubernetes.io/projected/aa628683-cd13-40e1-a275-1bf56d130479-kube-api-access-22vk8\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.786210 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa628683-cd13-40e1-a275-1bf56d130479-public-tls-certs\") pod \"heat-cfnapi-59bb6b8559-n8bq2\" (UID: \"aa628683-cd13-40e1-a275-1bf56d130479\") " pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.795047 4897 
scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:10:16 crc kubenswrapper[4897]: E0214 19:10:16.795363 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.800259 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.829731 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.895505 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" event={"ID":"f0d02d9c-0ec9-4a03-9c21-79f326ef56fd","Type":"ContainerDied","Data":"65a27d5625ade7c61f7ae7c342ae5b96b1279f6265a086efff4fde2a81cf2bb4"} Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.895555 4897 scope.go:117] "RemoveContainer" containerID="d6d96f99e3e501bf0c3cc95a99e620a870410dc6579ac6277fd950f1be1da235" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.895733 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-qjgh4" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.939059 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-qjgh4"] Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.949501 4897 scope.go:117] "RemoveContainer" containerID="0d3f38e80cfa2cd61fe0469a3460f65b599c1121f1882599883ebe942751d19d" Feb 14 19:10:16 crc kubenswrapper[4897]: I0214 19:10:16.964296 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-qjgh4"] Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.181767 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5858fcf85c-g8zcx"] Feb 14 19:10:17 crc kubenswrapper[4897]: W0214 19:10:17.194127 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffd0f657_d81f_4767_b645_685963cf78ca.slice/crio-88aef8cddbac8fe9fa710ea1f49eeff517d9f1978d50995663df09bacb995465 WatchSource:0}: Error finding container 88aef8cddbac8fe9fa710ea1f49eeff517d9f1978d50995663df09bacb995465: Status 404 returned error can't find the container with id 88aef8cddbac8fe9fa710ea1f49eeff517d9f1978d50995663df09bacb995465 Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.357634 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6dff47865f-dwdfs"] Feb 14 19:10:17 crc kubenswrapper[4897]: W0214 19:10:17.360205 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d94bdc7_c732_4513_878f_0d7f8ae186ca.slice/crio-63de15a45c790ee67121b57d5e5a2ead19db4f048d95f8b2f96a879c3f8c6463 WatchSource:0}: Error finding container 63de15a45c790ee67121b57d5e5a2ead19db4f048d95f8b2f96a879c3f8c6463: Status 404 returned error can't find the container with id 63de15a45c790ee67121b57d5e5a2ead19db4f048d95f8b2f96a879c3f8c6463 Feb 14 19:10:17 
crc kubenswrapper[4897]: I0214 19:10:17.499637 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59bb6b8559-n8bq2"] Feb 14 19:10:17 crc kubenswrapper[4897]: W0214 19:10:17.501208 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa628683_cd13_40e1_a275_1bf56d130479.slice/crio-9a31d792152791651a0e3ab71a64e4bff792f20215edff590449fddc28af0a1d WatchSource:0}: Error finding container 9a31d792152791651a0e3ab71a64e4bff792f20215edff590449fddc28af0a1d: Status 404 returned error can't find the container with id 9a31d792152791651a0e3ab71a64e4bff792f20215edff590449fddc28af0a1d Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.808906 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d02d9c-0ec9-4a03-9c21-79f326ef56fd" path="/var/lib/kubelet/pods/f0d02d9c-0ec9-4a03-9c21-79f326ef56fd/volumes" Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.917867 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5858fcf85c-g8zcx" event={"ID":"ffd0f657-d81f-4767-b645-685963cf78ca","Type":"ContainerStarted","Data":"9a06b05ffdac84a0537d10465873455d6d2a0d7f79925e463bc1bf4a77ef7f7b"} Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.917924 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5858fcf85c-g8zcx" event={"ID":"ffd0f657-d81f-4767-b645-685963cf78ca","Type":"ContainerStarted","Data":"88aef8cddbac8fe9fa710ea1f49eeff517d9f1978d50995663df09bacb995465"} Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.918110 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.923199 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6dff47865f-dwdfs" 
event={"ID":"8d94bdc7-c732-4513-878f-0d7f8ae186ca","Type":"ContainerStarted","Data":"63de15a45c790ee67121b57d5e5a2ead19db4f048d95f8b2f96a879c3f8c6463"} Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.924660 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" event={"ID":"aa628683-cd13-40e1-a275-1bf56d130479","Type":"ContainerStarted","Data":"9a31d792152791651a0e3ab71a64e4bff792f20215edff590449fddc28af0a1d"} Feb 14 19:10:17 crc kubenswrapper[4897]: I0214 19:10:17.941972 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5858fcf85c-g8zcx" podStartSLOduration=1.941954354 podStartE2EDuration="1.941954354s" podCreationTimestamp="2026-02-14 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:10:17.932348644 +0000 UTC m=+1670.908757127" watchObservedRunningTime="2026-02-14 19:10:17.941954354 +0000 UTC m=+1670.918362837" Feb 14 19:10:19 crc kubenswrapper[4897]: I0214 19:10:19.955983 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6dff47865f-dwdfs" event={"ID":"8d94bdc7-c732-4513-878f-0d7f8ae186ca","Type":"ContainerStarted","Data":"968beba5e5adab0ffe1d3df9bbc62fe8f852981a7129403d42e76c4fb642e938"} Feb 14 19:10:19 crc kubenswrapper[4897]: I0214 19:10:19.957702 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:19 crc kubenswrapper[4897]: I0214 19:10:19.959379 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" event={"ID":"aa628683-cd13-40e1-a275-1bf56d130479","Type":"ContainerStarted","Data":"d9bac807a83543b89c73831d56891013818e73bca597ff25eaab460f2fdbac27"} Feb 14 19:10:19 crc kubenswrapper[4897]: I0214 19:10:19.959840 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:19 crc kubenswrapper[4897]: I0214 19:10:19.979625 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6dff47865f-dwdfs" podStartSLOduration=2.357608654 podStartE2EDuration="3.979610731s" podCreationTimestamp="2026-02-14 19:10:16 +0000 UTC" firstStartedPulling="2026-02-14 19:10:17.362679017 +0000 UTC m=+1670.339087500" lastFinishedPulling="2026-02-14 19:10:18.984681094 +0000 UTC m=+1671.961089577" observedRunningTime="2026-02-14 19:10:19.974632325 +0000 UTC m=+1672.951040808" watchObservedRunningTime="2026-02-14 19:10:19.979610731 +0000 UTC m=+1672.956019204" Feb 14 19:10:19 crc kubenswrapper[4897]: I0214 19:10:19.998092 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" podStartSLOduration=2.516119982 podStartE2EDuration="3.998073997s" podCreationTimestamp="2026-02-14 19:10:16 +0000 UTC" firstStartedPulling="2026-02-14 19:10:17.503787321 +0000 UTC m=+1670.480195804" lastFinishedPulling="2026-02-14 19:10:18.985741336 +0000 UTC m=+1671.962149819" observedRunningTime="2026-02-14 19:10:19.996858529 +0000 UTC m=+1672.973267012" watchObservedRunningTime="2026-02-14 19:10:19.998073997 +0000 UTC m=+1672.974482480" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.858492 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2"] Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.860253 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.862891 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.863020 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.863511 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.877374 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.901387 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2"] Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.993216 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.993411 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.993577 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:24 crc kubenswrapper[4897]: I0214 19:10:24.993659 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/53f34fde-c1f7-4d7c-906e-eb55326f4789-kube-api-access-g5wxl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.095739 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/53f34fde-c1f7-4d7c-906e-eb55326f4789-kube-api-access-g5wxl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.095927 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.096053 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.096118 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.101960 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.102521 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.102667 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.118945 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/53f34fde-c1f7-4d7c-906e-eb55326f4789-kube-api-access-g5wxl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:25 crc kubenswrapper[4897]: I0214 19:10:25.183917 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:26 crc kubenswrapper[4897]: W0214 19:10:26.355415 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f34fde_c1f7_4d7c_906e_eb55326f4789.slice/crio-e2280921a13b0f6a3d32455ea05673e00c082021505a231062fb8f796c7dfd6e WatchSource:0}: Error finding container e2280921a13b0f6a3d32455ea05673e00c082021505a231062fb8f796c7dfd6e: Status 404 returned error can't find the container with id e2280921a13b0f6a3d32455ea05673e00c082021505a231062fb8f796c7dfd6e Feb 14 19:10:26 crc kubenswrapper[4897]: I0214 19:10:26.364105 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2"] Feb 14 19:10:27 crc kubenswrapper[4897]: I0214 19:10:27.061176 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" event={"ID":"53f34fde-c1f7-4d7c-906e-eb55326f4789","Type":"ContainerStarted","Data":"e2280921a13b0f6a3d32455ea05673e00c082021505a231062fb8f796c7dfd6e"} Feb 14 19:10:28 crc kubenswrapper[4897]: I0214 19:10:28.527755 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6dff47865f-dwdfs" Feb 14 19:10:28 crc 
kubenswrapper[4897]: I0214 19:10:28.541191 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-59bb6b8559-n8bq2" Feb 14 19:10:28 crc kubenswrapper[4897]: I0214 19:10:28.625104 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5fc95b4d56-9mkgz"] Feb 14 19:10:28 crc kubenswrapper[4897]: I0214 19:10:28.625358 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5fc95b4d56-9mkgz" podUID="a2149326-55f7-405e-a005-d2b44e58342c" containerName="heat-api" containerID="cri-o://23d04ad7640a980ddcfd7c7faf1400d8b46afa5ac4578c55dedf9535aeca02f1" gracePeriod=60 Feb 14 19:10:28 crc kubenswrapper[4897]: I0214 19:10:28.641662 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84bd5445c4-lf5pt"] Feb 14 19:10:28 crc kubenswrapper[4897]: I0214 19:10:28.641927 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" podUID="62ecb4f3-ad3f-4146-99b6-be063902ea75" containerName="heat-cfnapi" containerID="cri-o://71ac1f7e244a6daa7ed1c05a06b434a1c59fb9e3ac756c95f6d8f81ae1fc1090" gracePeriod=60 Feb 14 19:10:31 crc kubenswrapper[4897]: I0214 19:10:31.795845 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:10:31 crc kubenswrapper[4897]: E0214 19:10:31.796776 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:10:31 crc kubenswrapper[4897]: E0214 19:10:31.930265 4897 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62ecb4f3_ad3f_4146_99b6_be063902ea75.slice/crio-71ac1f7e244a6daa7ed1c05a06b434a1c59fb9e3ac756c95f6d8f81ae1fc1090.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2149326_55f7_405e_a005_d2b44e58342c.slice/crio-conmon-23d04ad7640a980ddcfd7c7faf1400d8b46afa5ac4578c55dedf9535aeca02f1.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:10:32 crc kubenswrapper[4897]: I0214 19:10:32.121242 4897 generic.go:334] "Generic (PLEG): container finished" podID="a2149326-55f7-405e-a005-d2b44e58342c" containerID="23d04ad7640a980ddcfd7c7faf1400d8b46afa5ac4578c55dedf9535aeca02f1" exitCode=0 Feb 14 19:10:32 crc kubenswrapper[4897]: I0214 19:10:32.121316 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fc95b4d56-9mkgz" event={"ID":"a2149326-55f7-405e-a005-d2b44e58342c","Type":"ContainerDied","Data":"23d04ad7640a980ddcfd7c7faf1400d8b46afa5ac4578c55dedf9535aeca02f1"} Feb 14 19:10:32 crc kubenswrapper[4897]: I0214 19:10:32.126929 4897 generic.go:334] "Generic (PLEG): container finished" podID="62ecb4f3-ad3f-4146-99b6-be063902ea75" containerID="71ac1f7e244a6daa7ed1c05a06b434a1c59fb9e3ac756c95f6d8f81ae1fc1090" exitCode=0 Feb 14 19:10:32 crc kubenswrapper[4897]: I0214 19:10:32.126970 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" event={"ID":"62ecb4f3-ad3f-4146-99b6-be063902ea75","Type":"ContainerDied","Data":"71ac1f7e244a6daa7ed1c05a06b434a1c59fb9e3ac756c95f6d8f81ae1fc1090"} Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.139079 4897 generic.go:334] "Generic (PLEG): container finished" podID="540a20b2-a6ae-4527-bb75-b6d570169dc2" containerID="fc2738ccdc11b3412bc28340529e06e71691d823b557c8a44f5be09fee0dc82d" exitCode=0 Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.139175 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"540a20b2-a6ae-4527-bb75-b6d570169dc2","Type":"ContainerDied","Data":"fc2738ccdc11b3412bc28340529e06e71691d823b557c8a44f5be09fee0dc82d"} Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.143307 4897 generic.go:334] "Generic (PLEG): container finished" podID="292d0d53-8176-4764-84c5-a899eb11ab99" containerID="4e394d973137312fa94fefaf0d8c877935f1768784bce5d484494175d18ae595" exitCode=0 Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.143337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"292d0d53-8176-4764-84c5-a899eb11ab99","Type":"ContainerDied","Data":"4e394d973137312fa94fefaf0d8c877935f1768784bce5d484494175d18ae595"} Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.323396 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5fc95b4d56-9mkgz" podUID="a2149326-55f7-405e-a005-d2b44e58342c" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.224:8004/healthcheck\": dial tcp 10.217.0.224:8004: connect: connection refused" Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.363123 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" podUID="62ecb4f3-ad3f-4146-99b6-be063902ea75" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.225:8000/healthcheck\": dial tcp 10.217.0.225:8000: connect: connection refused" Feb 14 19:10:33 crc kubenswrapper[4897]: I0214 19:10:33.378662 4897 scope.go:117] "RemoveContainer" containerID="7ab556f728a06c47b4fde7486fc8ac96b1b5906651fbc47fff920342644a0761" Feb 14 19:10:36 crc kubenswrapper[4897]: I0214 19:10:36.667796 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5858fcf85c-g8zcx" Feb 14 19:10:36 crc kubenswrapper[4897]: I0214 19:10:36.739677 4897 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/heat-engine-dc4df654d-9w4f2"] Feb 14 19:10:36 crc kubenswrapper[4897]: I0214 19:10:36.739871 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-dc4df654d-9w4f2" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerName="heat-engine" containerID="cri-o://323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" gracePeriod=60 Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.019544 4897 scope.go:117] "RemoveContainer" containerID="04221d64d92696a181811c2ac60b75595be0e422b637baaa9f90ac2bb60af323" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.199268 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.500256 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.611894 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-internal-tls-certs\") pod \"a2149326-55f7-405e-a005-d2b44e58342c\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.612276 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-combined-ca-bundle\") pod \"a2149326-55f7-405e-a005-d2b44e58342c\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.612477 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7rrv\" (UniqueName: \"kubernetes.io/projected/a2149326-55f7-405e-a005-d2b44e58342c-kube-api-access-l7rrv\") pod \"a2149326-55f7-405e-a005-d2b44e58342c\" (UID: 
\"a2149326-55f7-405e-a005-d2b44e58342c\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.612583 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data-custom\") pod \"a2149326-55f7-405e-a005-d2b44e58342c\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.612632 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data\") pod \"a2149326-55f7-405e-a005-d2b44e58342c\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.612658 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-public-tls-certs\") pod \"a2149326-55f7-405e-a005-d2b44e58342c\" (UID: \"a2149326-55f7-405e-a005-d2b44e58342c\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.654222 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2149326-55f7-405e-a005-d2b44e58342c" (UID: "a2149326-55f7-405e-a005-d2b44e58342c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.654262 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2149326-55f7-405e-a005-d2b44e58342c-kube-api-access-l7rrv" (OuterVolumeSpecName: "kube-api-access-l7rrv") pod "a2149326-55f7-405e-a005-d2b44e58342c" (UID: "a2149326-55f7-405e-a005-d2b44e58342c"). InnerVolumeSpecName "kube-api-access-l7rrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.715638 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7rrv\" (UniqueName: \"kubernetes.io/projected/a2149326-55f7-405e-a005-d2b44e58342c-kube-api-access-l7rrv\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.715682 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.730257 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2149326-55f7-405e-a005-d2b44e58342c" (UID: "a2149326-55f7-405e-a005-d2b44e58342c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.771511 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.814545 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a2149326-55f7-405e-a005-d2b44e58342c" (UID: "a2149326-55f7-405e-a005-d2b44e58342c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.814877 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data" (OuterVolumeSpecName: "config-data") pod "a2149326-55f7-405e-a005-d2b44e58342c" (UID: "a2149326-55f7-405e-a005-d2b44e58342c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.818318 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.818347 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.818355 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.834206 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a2149326-55f7-405e-a005-d2b44e58342c" (UID: "a2149326-55f7-405e-a005-d2b44e58342c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.919587 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994sd\" (UniqueName: \"kubernetes.io/projected/62ecb4f3-ad3f-4146-99b6-be063902ea75-kube-api-access-994sd\") pod \"62ecb4f3-ad3f-4146-99b6-be063902ea75\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.919776 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-internal-tls-certs\") pod \"62ecb4f3-ad3f-4146-99b6-be063902ea75\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.919893 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-public-tls-certs\") pod \"62ecb4f3-ad3f-4146-99b6-be063902ea75\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.919918 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data\") pod \"62ecb4f3-ad3f-4146-99b6-be063902ea75\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.919967 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data-custom\") pod \"62ecb4f3-ad3f-4146-99b6-be063902ea75\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.920019 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-combined-ca-bundle\") pod \"62ecb4f3-ad3f-4146-99b6-be063902ea75\" (UID: \"62ecb4f3-ad3f-4146-99b6-be063902ea75\") " Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.920601 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2149326-55f7-405e-a005-d2b44e58342c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.924494 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ecb4f3-ad3f-4146-99b6-be063902ea75-kube-api-access-994sd" (OuterVolumeSpecName: "kube-api-access-994sd") pod "62ecb4f3-ad3f-4146-99b6-be063902ea75" (UID: "62ecb4f3-ad3f-4146-99b6-be063902ea75"). InnerVolumeSpecName "kube-api-access-994sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.940307 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "62ecb4f3-ad3f-4146-99b6-be063902ea75" (UID: "62ecb4f3-ad3f-4146-99b6-be063902ea75"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:37 crc kubenswrapper[4897]: I0214 19:10:37.965909 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62ecb4f3-ad3f-4146-99b6-be063902ea75" (UID: "62ecb4f3-ad3f-4146-99b6-be063902ea75"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.004408 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "62ecb4f3-ad3f-4146-99b6-be063902ea75" (UID: "62ecb4f3-ad3f-4146-99b6-be063902ea75"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.014201 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data" (OuterVolumeSpecName: "config-data") pod "62ecb4f3-ad3f-4146-99b6-be063902ea75" (UID: "62ecb4f3-ad3f-4146-99b6-be063902ea75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.023611 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.023808 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.023890 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.023963 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.024043 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994sd\" (UniqueName: \"kubernetes.io/projected/62ecb4f3-ad3f-4146-99b6-be063902ea75-kube-api-access-994sd\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.036165 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "62ecb4f3-ad3f-4146-99b6-be063902ea75" (UID: "62ecb4f3-ad3f-4146-99b6-be063902ea75"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.126603 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ecb4f3-ad3f-4146-99b6-be063902ea75-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.229386 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"540a20b2-a6ae-4527-bb75-b6d570169dc2","Type":"ContainerStarted","Data":"21280252bfbb40fd0e55bcab2b4a70b67c31369bd96e5b8a2085aed6e76af66e"} Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.229906 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.230952 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.231218 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84bd5445c4-lf5pt" event={"ID":"62ecb4f3-ad3f-4146-99b6-be063902ea75","Type":"ContainerDied","Data":"cfd8c199990154c0b31159d75ba1f7ac009cbd37c3177db06842bda6dca9e4fe"} Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.231298 4897 scope.go:117] "RemoveContainer" containerID="71ac1f7e244a6daa7ed1c05a06b434a1c59fb9e3ac756c95f6d8f81ae1fc1090" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.233147 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" event={"ID":"53f34fde-c1f7-4d7c-906e-eb55326f4789","Type":"ContainerStarted","Data":"0e14ea85d8fd9fa65e8a539f2cd0bd153a2b5fc62605999ed24394cd0641008b"} Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.243929 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"292d0d53-8176-4764-84c5-a899eb11ab99","Type":"ContainerStarted","Data":"e92cf71e3fc47b1c7d46ebeb3e4b68a8acb07f96a33e1d7465b0594ee54ccc77"} Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.245044 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.247980 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fc95b4d56-9mkgz" event={"ID":"a2149326-55f7-405e-a005-d2b44e58342c","Type":"ContainerDied","Data":"24776381426f3ceb275ffdbb9213fcd9be95374ca27842b95cdc009f1c5a3c7b"} Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.248132 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5fc95b4d56-9mkgz" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.262712 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=45.26269568 podStartE2EDuration="45.26269568s" podCreationTimestamp="2026-02-14 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:10:38.25823843 +0000 UTC m=+1691.234646933" watchObservedRunningTime="2026-02-14 19:10:38.26269568 +0000 UTC m=+1691.239104163" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.263301 4897 scope.go:117] "RemoveContainer" containerID="23d04ad7640a980ddcfd7c7faf1400d8b46afa5ac4578c55dedf9535aeca02f1" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.290456 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.290438605 podStartE2EDuration="41.290438605s" podCreationTimestamp="2026-02-14 19:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:10:38.284448198 +0000 UTC m=+1691.260856691" watchObservedRunningTime="2026-02-14 19:10:38.290438605 +0000 UTC m=+1691.266847088" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.307097 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" podStartSLOduration=3.471519383 podStartE2EDuration="14.307080165s" podCreationTimestamp="2026-02-14 19:10:24 +0000 UTC" firstStartedPulling="2026-02-14 19:10:26.358899822 +0000 UTC m=+1679.335308305" lastFinishedPulling="2026-02-14 19:10:37.194460604 +0000 UTC m=+1690.170869087" observedRunningTime="2026-02-14 19:10:38.303533384 +0000 UTC m=+1691.279941867" watchObservedRunningTime="2026-02-14 19:10:38.307080165 +0000 
UTC m=+1691.283488648" Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.337658 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84bd5445c4-lf5pt"] Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.347853 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84bd5445c4-lf5pt"] Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.359065 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5fc95b4d56-9mkgz"] Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.370448 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5fc95b4d56-9mkgz"] Feb 14 19:10:38 crc kubenswrapper[4897]: I0214 19:10:38.624761 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 19:10:39 crc kubenswrapper[4897]: I0214 19:10:39.827297 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ecb4f3-ad3f-4146-99b6-be063902ea75" path="/var/lib/kubelet/pods/62ecb4f3-ad3f-4146-99b6-be063902ea75/volumes" Feb 14 19:10:39 crc kubenswrapper[4897]: I0214 19:10:39.828769 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2149326-55f7-405e-a005-d2b44e58342c" path="/var/lib/kubelet/pods/a2149326-55f7-405e-a005-d2b44e58342c/volumes" Feb 14 19:10:42 crc kubenswrapper[4897]: I0214 19:10:42.794081 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:10:42 crc kubenswrapper[4897]: E0214 19:10:42.795959 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" 
podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.680601 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-hb6vq"] Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.696723 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-hb6vq"] Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.793978 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-vxthf"] Feb 14 19:10:43 crc kubenswrapper[4897]: E0214 19:10:43.794644 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ecb4f3-ad3f-4146-99b6-be063902ea75" containerName="heat-cfnapi" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.794662 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ecb4f3-ad3f-4146-99b6-be063902ea75" containerName="heat-cfnapi" Feb 14 19:10:43 crc kubenswrapper[4897]: E0214 19:10:43.794716 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2149326-55f7-405e-a005-d2b44e58342c" containerName="heat-api" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.794728 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2149326-55f7-405e-a005-d2b44e58342c" containerName="heat-api" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.795017 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2149326-55f7-405e-a005-d2b44e58342c" containerName="heat-api" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.795137 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ecb4f3-ad3f-4146-99b6-be063902ea75" containerName="heat-cfnapi" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.798336 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.835314 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.847777 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ec880e-a3d2-47d3-86b2-b3e826d66a52" path="/var/lib/kubelet/pods/b9ec880e-a3d2-47d3-86b2-b3e826d66a52/volumes" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.848454 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-vxthf"] Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.864970 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-config-data\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.865463 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-scripts\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.866018 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-combined-ca-bundle\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.866355 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbk4\" (UniqueName: 
\"kubernetes.io/projected/20144c84-5098-42ee-9c62-576ed65ac421-kube-api-access-ccbk4\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.969355 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-config-data\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.970399 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-scripts\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.970574 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-combined-ca-bundle\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.970797 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccbk4\" (UniqueName: \"kubernetes.io/projected/20144c84-5098-42ee-9c62-576ed65ac421-kube-api-access-ccbk4\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.975653 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-scripts\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " 
pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.978354 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-combined-ca-bundle\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.988778 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccbk4\" (UniqueName: \"kubernetes.io/projected/20144c84-5098-42ee-9c62-576ed65ac421-kube-api-access-ccbk4\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:43 crc kubenswrapper[4897]: I0214 19:10:43.989006 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-config-data\") pod \"aodh-db-sync-vxthf\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") " pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:44 crc kubenswrapper[4897]: I0214 19:10:44.157872 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-vxthf" Feb 14 19:10:44 crc kubenswrapper[4897]: I0214 19:10:44.684806 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-vxthf"] Feb 14 19:10:44 crc kubenswrapper[4897]: W0214 19:10:44.686838 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20144c84_5098_42ee_9c62_576ed65ac421.slice/crio-13653217fdb60a6fb338e440a42f16ac7805215702e1f85cb4d1bacb5ae205e3 WatchSource:0}: Error finding container 13653217fdb60a6fb338e440a42f16ac7805215702e1f85cb4d1bacb5ae205e3: Status 404 returned error can't find the container with id 13653217fdb60a6fb338e440a42f16ac7805215702e1f85cb4d1bacb5ae205e3 Feb 14 19:10:45 crc kubenswrapper[4897]: I0214 19:10:45.341455 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vxthf" event={"ID":"20144c84-5098-42ee-9c62-576ed65ac421","Type":"ContainerStarted","Data":"13653217fdb60a6fb338e440a42f16ac7805215702e1f85cb4d1bacb5ae205e3"} Feb 14 19:10:46 crc kubenswrapper[4897]: E0214 19:10:46.405537 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:10:46 crc kubenswrapper[4897]: E0214 19:10:46.406851 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:10:46 crc kubenswrapper[4897]: E0214 19:10:46.411761 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:10:46 crc kubenswrapper[4897]: E0214 19:10:46.411797 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-dc4df654d-9w4f2" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerName="heat-engine" Feb 14 19:10:48 crc kubenswrapper[4897]: I0214 19:10:48.009131 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="292d0d53-8176-4764-84c5-a899eb11ab99" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.23:5671: connect: connection refused" Feb 14 19:10:53 crc kubenswrapper[4897]: I0214 19:10:53.794378 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:10:53 crc kubenswrapper[4897]: E0214 19:10:53.795236 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:10:53 crc kubenswrapper[4897]: I0214 19:10:53.876703 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="540a20b2-a6ae-4527-bb75-b6d570169dc2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused" Feb 14 19:10:55 crc kubenswrapper[4897]: I0214 19:10:55.500626 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vxthf" event={"ID":"20144c84-5098-42ee-9c62-576ed65ac421","Type":"ContainerStarted","Data":"ce8e06f868cc33d4d8d6e6a625e0df406ac0bda4a3373304f4aaafadda3adb1e"} Feb 14 19:10:55 crc kubenswrapper[4897]: I0214 19:10:55.503470 4897 generic.go:334] "Generic (PLEG): container finished" podID="53f34fde-c1f7-4d7c-906e-eb55326f4789" containerID="0e14ea85d8fd9fa65e8a539f2cd0bd153a2b5fc62605999ed24394cd0641008b" exitCode=0 Feb 14 19:10:55 crc kubenswrapper[4897]: I0214 19:10:55.503511 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" event={"ID":"53f34fde-c1f7-4d7c-906e-eb55326f4789","Type":"ContainerDied","Data":"0e14ea85d8fd9fa65e8a539f2cd0bd153a2b5fc62605999ed24394cd0641008b"} Feb 14 19:10:55 crc kubenswrapper[4897]: I0214 19:10:55.544918 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-vxthf" podStartSLOduration=2.197150584 podStartE2EDuration="12.544902064s" podCreationTimestamp="2026-02-14 19:10:43 +0000 UTC" firstStartedPulling="2026-02-14 19:10:44.690840354 +0000 UTC m=+1697.667248837" lastFinishedPulling="2026-02-14 19:10:55.038591834 +0000 UTC m=+1708.015000317" observedRunningTime="2026-02-14 19:10:55.528308946 +0000 UTC m=+1708.504717429" watchObservedRunningTime="2026-02-14 19:10:55.544902064 +0000 UTC m=+1708.521310547" Feb 14 19:10:56 crc kubenswrapper[4897]: E0214 19:10:56.409125 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:10:56 crc kubenswrapper[4897]: E0214 19:10:56.410812 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:10:56 crc kubenswrapper[4897]: E0214 19:10:56.414404 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 19:10:56 crc kubenswrapper[4897]: E0214 19:10:56.414468 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-dc4df654d-9w4f2" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerName="heat-engine" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.064462 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.227644 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-ssh-key-openstack-edpm-ipam\") pod \"53f34fde-c1f7-4d7c-906e-eb55326f4789\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.227939 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-repo-setup-combined-ca-bundle\") pod \"53f34fde-c1f7-4d7c-906e-eb55326f4789\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.227996 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/53f34fde-c1f7-4d7c-906e-eb55326f4789-kube-api-access-g5wxl\") pod \"53f34fde-c1f7-4d7c-906e-eb55326f4789\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.228109 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-inventory\") pod \"53f34fde-c1f7-4d7c-906e-eb55326f4789\" (UID: \"53f34fde-c1f7-4d7c-906e-eb55326f4789\") " Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.240322 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f34fde-c1f7-4d7c-906e-eb55326f4789-kube-api-access-g5wxl" (OuterVolumeSpecName: "kube-api-access-g5wxl") pod "53f34fde-c1f7-4d7c-906e-eb55326f4789" (UID: "53f34fde-c1f7-4d7c-906e-eb55326f4789"). InnerVolumeSpecName "kube-api-access-g5wxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.242919 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "53f34fde-c1f7-4d7c-906e-eb55326f4789" (UID: "53f34fde-c1f7-4d7c-906e-eb55326f4789"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.299199 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "53f34fde-c1f7-4d7c-906e-eb55326f4789" (UID: "53f34fde-c1f7-4d7c-906e-eb55326f4789"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.304765 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-inventory" (OuterVolumeSpecName: "inventory") pod "53f34fde-c1f7-4d7c-906e-eb55326f4789" (UID: "53f34fde-c1f7-4d7c-906e-eb55326f4789"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.331426 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.331458 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.331468 4897 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f34fde-c1f7-4d7c-906e-eb55326f4789-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.331478 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5wxl\" (UniqueName: \"kubernetes.io/projected/53f34fde-c1f7-4d7c-906e-eb55326f4789-kube-api-access-g5wxl\") on node \"crc\" DevicePath \"\"" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.532943 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" event={"ID":"53f34fde-c1f7-4d7c-906e-eb55326f4789","Type":"ContainerDied","Data":"e2280921a13b0f6a3d32455ea05673e00c082021505a231062fb8f796c7dfd6e"} Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.533284 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2280921a13b0f6a3d32455ea05673e00c082021505a231062fb8f796c7dfd6e" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.533026 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.649981 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"] Feb 14 19:10:57 crc kubenswrapper[4897]: E0214 19:10:57.650582 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f34fde-c1f7-4d7c-906e-eb55326f4789" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.650596 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f34fde-c1f7-4d7c-906e-eb55326f4789" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.650819 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f34fde-c1f7-4d7c-906e-eb55326f4789" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.651672 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.654145 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.654330 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.654454 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.654770 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.667250 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"]
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.742534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phc4d\" (UniqueName: \"kubernetes.io/projected/c6adeab7-7f81-44b5-8a1d-072f7c050466-kube-api-access-phc4d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.742615 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.742653 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.844980 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phc4d\" (UniqueName: \"kubernetes.io/projected/c6adeab7-7f81-44b5-8a1d-072f7c050466-kube-api-access-phc4d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.845072 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.845099 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.851562 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.851822 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.867221 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phc4d\" (UniqueName: \"kubernetes.io/projected/c6adeab7-7f81-44b5-8a1d-072f7c050466-kube-api-access-phc4d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-58kld\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:57 crc kubenswrapper[4897]: I0214 19:10:57.922935 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dc4df654d-9w4f2"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.009233 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.026378 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.054157 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6slhc\" (UniqueName: \"kubernetes.io/projected/3ff2fa58-497f-4e1c-8447-a25032ebac67-kube-api-access-6slhc\") pod \"3ff2fa58-497f-4e1c-8447-a25032ebac67\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") "
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.054282 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-combined-ca-bundle\") pod \"3ff2fa58-497f-4e1c-8447-a25032ebac67\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") "
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.054310 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data-custom\") pod \"3ff2fa58-497f-4e1c-8447-a25032ebac67\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") "
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.054500 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data\") pod \"3ff2fa58-497f-4e1c-8447-a25032ebac67\" (UID: \"3ff2fa58-497f-4e1c-8447-a25032ebac67\") "
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.110016 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ff2fa58-497f-4e1c-8447-a25032ebac67" (UID: "3ff2fa58-497f-4e1c-8447-a25032ebac67"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.111422 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff2fa58-497f-4e1c-8447-a25032ebac67-kube-api-access-6slhc" (OuterVolumeSpecName: "kube-api-access-6slhc") pod "3ff2fa58-497f-4e1c-8447-a25032ebac67" (UID: "3ff2fa58-497f-4e1c-8447-a25032ebac67"). InnerVolumeSpecName "kube-api-access-6slhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.143141 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ff2fa58-497f-4e1c-8447-a25032ebac67" (UID: "3ff2fa58-497f-4e1c-8447-a25032ebac67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.152146 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data" (OuterVolumeSpecName: "config-data") pod "3ff2fa58-497f-4e1c-8447-a25032ebac67" (UID: "3ff2fa58-497f-4e1c-8447-a25032ebac67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.163689 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6slhc\" (UniqueName: \"kubernetes.io/projected/3ff2fa58-497f-4e1c-8447-a25032ebac67-kube-api-access-6slhc\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.163716 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.163726 4897 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.163735 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff2fa58-497f-4e1c-8447-a25032ebac67-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.546526 4897 generic.go:334] "Generic (PLEG): container finished" podID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857" exitCode=0
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.546618 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dc4df654d-9w4f2"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.546631 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dc4df654d-9w4f2" event={"ID":"3ff2fa58-497f-4e1c-8447-a25032ebac67","Type":"ContainerDied","Data":"323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857"}
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.546955 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dc4df654d-9w4f2" event={"ID":"3ff2fa58-497f-4e1c-8447-a25032ebac67","Type":"ContainerDied","Data":"691ea19880ab14d79ac61e482efcb2b940fd315495537416e3dfc7d8b5586d47"}
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.546984 4897 scope.go:117] "RemoveContainer" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.550186 4897 generic.go:334] "Generic (PLEG): container finished" podID="20144c84-5098-42ee-9c62-576ed65ac421" containerID="ce8e06f868cc33d4d8d6e6a625e0df406ac0bda4a3373304f4aaafadda3adb1e" exitCode=0
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.550219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vxthf" event={"ID":"20144c84-5098-42ee-9c62-576ed65ac421","Type":"ContainerDied","Data":"ce8e06f868cc33d4d8d6e6a625e0df406ac0bda4a3373304f4aaafadda3adb1e"}
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.582231 4897 scope.go:117] "RemoveContainer" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857"
Feb 14 19:10:58 crc kubenswrapper[4897]: E0214 19:10:58.582646 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857\": container with ID starting with 323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857 not found: ID does not exist" containerID="323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.582676 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857"} err="failed to get container status \"323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857\": rpc error: code = NotFound desc = could not find container \"323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857\": container with ID starting with 323bab4d02197e6aa185d59bbda6d52187e4463538dc106202c3c2861baaa857 not found: ID does not exist"
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.609229 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-dc4df654d-9w4f2"]
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.619542 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-dc4df654d-9w4f2"]
Feb 14 19:10:58 crc kubenswrapper[4897]: I0214 19:10:58.715880 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"]
Feb 14 19:10:59 crc kubenswrapper[4897]: I0214 19:10:59.571178 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld" event={"ID":"c6adeab7-7f81-44b5-8a1d-072f7c050466","Type":"ContainerStarted","Data":"c48276b400adb1a31f18d48e53f7f78c1572b6c1449ed8df4e696adb3b292499"}
Feb 14 19:10:59 crc kubenswrapper[4897]: I0214 19:10:59.571801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld" event={"ID":"c6adeab7-7f81-44b5-8a1d-072f7c050466","Type":"ContainerStarted","Data":"845031ddf50cdea5bdbb5abcf40624c4cc074cfa83348a8a193dc7cde6809990"}
Feb 14 19:10:59 crc kubenswrapper[4897]: I0214 19:10:59.616652 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld" podStartSLOduration=2.170984169 podStartE2EDuration="2.616628636s" podCreationTimestamp="2026-02-14 19:10:57 +0000 UTC" firstStartedPulling="2026-02-14 19:10:58.678888823 +0000 UTC m=+1711.655297306" lastFinishedPulling="2026-02-14 19:10:59.12453329 +0000 UTC m=+1712.100941773" observedRunningTime="2026-02-14 19:10:59.593739561 +0000 UTC m=+1712.570148084" watchObservedRunningTime="2026-02-14 19:10:59.616628636 +0000 UTC m=+1712.593037149"
Feb 14 19:10:59 crc kubenswrapper[4897]: I0214 19:10:59.816700 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" path="/var/lib/kubelet/pods/3ff2fa58-497f-4e1c-8447-a25032ebac67/volumes"
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.053807 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-vxthf"
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.118844 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-combined-ca-bundle\") pod \"20144c84-5098-42ee-9c62-576ed65ac421\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") "
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.118987 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-scripts\") pod \"20144c84-5098-42ee-9c62-576ed65ac421\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") "
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.119285 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccbk4\" (UniqueName: \"kubernetes.io/projected/20144c84-5098-42ee-9c62-576ed65ac421-kube-api-access-ccbk4\") pod \"20144c84-5098-42ee-9c62-576ed65ac421\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") "
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.119386 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-config-data\") pod \"20144c84-5098-42ee-9c62-576ed65ac421\" (UID: \"20144c84-5098-42ee-9c62-576ed65ac421\") "
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.133288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-scripts" (OuterVolumeSpecName: "scripts") pod "20144c84-5098-42ee-9c62-576ed65ac421" (UID: "20144c84-5098-42ee-9c62-576ed65ac421"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.140361 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20144c84-5098-42ee-9c62-576ed65ac421-kube-api-access-ccbk4" (OuterVolumeSpecName: "kube-api-access-ccbk4") pod "20144c84-5098-42ee-9c62-576ed65ac421" (UID: "20144c84-5098-42ee-9c62-576ed65ac421"). InnerVolumeSpecName "kube-api-access-ccbk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.154280 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-config-data" (OuterVolumeSpecName: "config-data") pod "20144c84-5098-42ee-9c62-576ed65ac421" (UID: "20144c84-5098-42ee-9c62-576ed65ac421"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.184122 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20144c84-5098-42ee-9c62-576ed65ac421" (UID: "20144c84-5098-42ee-9c62-576ed65ac421"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.222158 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.222199 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.222211 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccbk4\" (UniqueName: \"kubernetes.io/projected/20144c84-5098-42ee-9c62-576ed65ac421-kube-api-access-ccbk4\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.222225 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20144c84-5098-42ee-9c62-576ed65ac421-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.589431 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-vxthf"
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.589424 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vxthf" event={"ID":"20144c84-5098-42ee-9c62-576ed65ac421","Type":"ContainerDied","Data":"13653217fdb60a6fb338e440a42f16ac7805215702e1f85cb4d1bacb5ae205e3"}
Feb 14 19:11:00 crc kubenswrapper[4897]: I0214 19:11:00.590417 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13653217fdb60a6fb338e440a42f16ac7805215702e1f85cb4d1bacb5ae205e3"
Feb 14 19:11:02 crc kubenswrapper[4897]: I0214 19:11:02.624088 4897 generic.go:334] "Generic (PLEG): container finished" podID="c6adeab7-7f81-44b5-8a1d-072f7c050466" containerID="c48276b400adb1a31f18d48e53f7f78c1572b6c1449ed8df4e696adb3b292499" exitCode=0
Feb 14 19:11:02 crc kubenswrapper[4897]: I0214 19:11:02.624220 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld" event={"ID":"c6adeab7-7f81-44b5-8a1d-072f7c050466","Type":"ContainerDied","Data":"c48276b400adb1a31f18d48e53f7f78c1572b6c1449ed8df4e696adb3b292499"}
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.880125 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.880825 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-api" containerID="cri-o://fbf363a8091a98962cc88d42321a52dcefe63a2ba2c0f4dead34de765a46d9b1" gracePeriod=30
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.880940 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-evaluator" containerID="cri-o://fa44bbb6b1b409a42c81589c5d6d3fe0a9db210143a884471f840efec692a131" gracePeriod=30
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.881072 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-notifier" containerID="cri-o://75bf987ce8bfc743c8e7002ca68f6bd80b0cd27a2bcb19f9d8e8481a23063b43" gracePeriod=30
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.880938 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-listener" containerID="cri-o://c495243fd15aec00ff4c52117972de70493f702eca329f10008045249c618c50" gracePeriod=30
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.882112 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 14 19:11:03 crc kubenswrapper[4897]: I0214 19:11:03.977023 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.250199 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.357197 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-ssh-key-openstack-edpm-ipam\") pod \"c6adeab7-7f81-44b5-8a1d-072f7c050466\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") "
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.357253 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phc4d\" (UniqueName: \"kubernetes.io/projected/c6adeab7-7f81-44b5-8a1d-072f7c050466-kube-api-access-phc4d\") pod \"c6adeab7-7f81-44b5-8a1d-072f7c050466\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") "
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.357324 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-inventory\") pod \"c6adeab7-7f81-44b5-8a1d-072f7c050466\" (UID: \"c6adeab7-7f81-44b5-8a1d-072f7c050466\") "
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.376068 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6adeab7-7f81-44b5-8a1d-072f7c050466-kube-api-access-phc4d" (OuterVolumeSpecName: "kube-api-access-phc4d") pod "c6adeab7-7f81-44b5-8a1d-072f7c050466" (UID: "c6adeab7-7f81-44b5-8a1d-072f7c050466"). InnerVolumeSpecName "kube-api-access-phc4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.405288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6adeab7-7f81-44b5-8a1d-072f7c050466" (UID: "c6adeab7-7f81-44b5-8a1d-072f7c050466"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.408243 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-inventory" (OuterVolumeSpecName: "inventory") pod "c6adeab7-7f81-44b5-8a1d-072f7c050466" (UID: "c6adeab7-7f81-44b5-8a1d-072f7c050466"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.461943 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.461974 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6adeab7-7f81-44b5-8a1d-072f7c050466-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.461987 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phc4d\" (UniqueName: \"kubernetes.io/projected/c6adeab7-7f81-44b5-8a1d-072f7c050466-kube-api-access-phc4d\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.648100 4897 generic.go:334] "Generic (PLEG): container finished" podID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerID="fa44bbb6b1b409a42c81589c5d6d3fe0a9db210143a884471f840efec692a131" exitCode=0
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.648375 4897 generic.go:334] "Generic (PLEG): container finished" podID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerID="fbf363a8091a98962cc88d42321a52dcefe63a2ba2c0f4dead34de765a46d9b1" exitCode=0
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.648183 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerDied","Data":"fa44bbb6b1b409a42c81589c5d6d3fe0a9db210143a884471f840efec692a131"}
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.648487 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerDied","Data":"fbf363a8091a98962cc88d42321a52dcefe63a2ba2c0f4dead34de765a46d9b1"}
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.650113 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld" event={"ID":"c6adeab7-7f81-44b5-8a1d-072f7c050466","Type":"ContainerDied","Data":"845031ddf50cdea5bdbb5abcf40624c4cc074cfa83348a8a193dc7cde6809990"}
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.650220 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845031ddf50cdea5bdbb5abcf40624c4cc074cfa83348a8a193dc7cde6809990"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.650190 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-58kld"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.730355 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"]
Feb 14 19:11:04 crc kubenswrapper[4897]: E0214 19:11:04.730821 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerName="heat-engine"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.730837 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerName="heat-engine"
Feb 14 19:11:04 crc kubenswrapper[4897]: E0214 19:11:04.730850 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20144c84-5098-42ee-9c62-576ed65ac421" containerName="aodh-db-sync"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.730856 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="20144c84-5098-42ee-9c62-576ed65ac421" containerName="aodh-db-sync"
Feb 14 19:11:04 crc kubenswrapper[4897]: E0214 19:11:04.730879 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6adeab7-7f81-44b5-8a1d-072f7c050466" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.730885 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6adeab7-7f81-44b5-8a1d-072f7c050466" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.731945 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6adeab7-7f81-44b5-8a1d-072f7c050466" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.731968 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="20144c84-5098-42ee-9c62-576ed65ac421" containerName="aodh-db-sync"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.731986 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff2fa58-497f-4e1c-8447-a25032ebac67" containerName="heat-engine"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.732892 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.737252 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.737274 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.737438 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.737553 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.745503 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"]
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.768176 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.768290 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.768332 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zjk\" (UniqueName: \"kubernetes.io/projected/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-kube-api-access-j6zjk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.768535 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.870465 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.870578 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.870642 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.870670 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zjk\" (UniqueName: \"kubernetes.io/projected/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-kube-api-access-j6zjk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.875115 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.875407 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.876719 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:04 crc kubenswrapper[4897]: I0214 19:11:04.895758 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zjk\" (UniqueName: \"kubernetes.io/projected/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-kube-api-access-j6zjk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:05 crc kubenswrapper[4897]: I0214 19:11:05.061758 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"
Feb 14 19:11:05 crc kubenswrapper[4897]: I0214 19:11:05.630872 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk"]
Feb 14 19:11:05 crc kubenswrapper[4897]: I0214 19:11:05.663660 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" event={"ID":"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2","Type":"ContainerStarted","Data":"1971ff5d9170bd832fe012daa64ef1f67521c137c390834c8fa8efcafecd8df4"}
Feb 14 19:11:06 crc kubenswrapper[4897]: I0214 19:11:06.707659 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" event={"ID":"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2","Type":"ContainerStarted","Data":"209719dc4cd051982d8dfbc5bf76a44ba4e4f12224d7e34ec7c79ebffd41dfff"}
Feb 14 19:11:06 crc kubenswrapper[4897]: I0214 19:11:06.736990 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" podStartSLOduration=2.360881347 podStartE2EDuration="2.736969462s" podCreationTimestamp="2026-02-14 19:11:04 +0000 UTC" firstStartedPulling="2026-02-14 19:11:05.63358209 +0000 UTC m=+1718.609990583" lastFinishedPulling="2026-02-14 19:11:06.009670225 +0000 UTC m=+1718.986078698" observedRunningTime="2026-02-14 19:11:06.725086921 +0000 UTC m=+1719.701495404" watchObservedRunningTime="2026-02-14 19:11:06.736969462 +0000 UTC m=+1719.713377955"
Feb 14 19:11:07 crc kubenswrapper[4897]: I0214 19:11:07.727128 4897 generic.go:334] "Generic (PLEG): container finished" podID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerID="c495243fd15aec00ff4c52117972de70493f702eca329f10008045249c618c50" exitCode=0
Feb 14 19:11:07 crc kubenswrapper[4897]: I0214 19:11:07.727249 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerDied","Data":"c495243fd15aec00ff4c52117972de70493f702eca329f10008045249c618c50"}
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.742071 4897 generic.go:334] "Generic (PLEG): container finished" podID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerID="75bf987ce8bfc743c8e7002ca68f6bd80b0cd27a2bcb19f9d8e8481a23063b43" exitCode=0
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.742113 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerDied","Data":"75bf987ce8bfc743c8e7002ca68f6bd80b0cd27a2bcb19f9d8e8481a23063b43"}
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.742335 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"944b8f01-b27e-4d2a-b198-b44a9b10e47b","Type":"ContainerDied","Data":"8c077af132f308fc9645f8f948119cfeb743e6123a9f67d199ceff7fb4a926da"}
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.742349 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c077af132f308fc9645f8f948119cfeb743e6123a9f67d199ceff7fb4a926da"
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.786023 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.794399 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:11:08 crc kubenswrapper[4897]: E0214 19:11:08.794703 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.881949 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppnhp\" (UniqueName: \"kubernetes.io/projected/944b8f01-b27e-4d2a-b198-b44a9b10e47b-kube-api-access-ppnhp\") pod \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") "
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.882181 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-internal-tls-certs\") pod \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") "
Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.883426 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-combined-ca-bundle\") pod \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\" (UID:
\"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.883804 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-config-data\") pod \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.883880 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-scripts\") pod \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.883909 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-public-tls-certs\") pod \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\" (UID: \"944b8f01-b27e-4d2a-b198-b44a9b10e47b\") " Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.888573 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-scripts" (OuterVolumeSpecName: "scripts") pod "944b8f01-b27e-4d2a-b198-b44a9b10e47b" (UID: "944b8f01-b27e-4d2a-b198-b44a9b10e47b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.889249 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944b8f01-b27e-4d2a-b198-b44a9b10e47b-kube-api-access-ppnhp" (OuterVolumeSpecName: "kube-api-access-ppnhp") pod "944b8f01-b27e-4d2a-b198-b44a9b10e47b" (UID: "944b8f01-b27e-4d2a-b198-b44a9b10e47b"). InnerVolumeSpecName "kube-api-access-ppnhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.949012 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "944b8f01-b27e-4d2a-b198-b44a9b10e47b" (UID: "944b8f01-b27e-4d2a-b198-b44a9b10e47b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.995112 4897 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.995347 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppnhp\" (UniqueName: \"kubernetes.io/projected/944b8f01-b27e-4d2a-b198-b44a9b10e47b-kube-api-access-ppnhp\") on node \"crc\" DevicePath \"\"" Feb 14 19:11:08 crc kubenswrapper[4897]: I0214 19:11:08.995361 4897 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.036232 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "944b8f01-b27e-4d2a-b198-b44a9b10e47b" (UID: "944b8f01-b27e-4d2a-b198-b44a9b10e47b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.080503 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "944b8f01-b27e-4d2a-b198-b44a9b10e47b" (UID: "944b8f01-b27e-4d2a-b198-b44a9b10e47b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.082583 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-config-data" (OuterVolumeSpecName: "config-data") pod "944b8f01-b27e-4d2a-b198-b44a9b10e47b" (UID: "944b8f01-b27e-4d2a-b198-b44a9b10e47b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.098159 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.098200 4897 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.098216 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944b8f01-b27e-4d2a-b198-b44a9b10e47b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.523963 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="rabbitmq" 
containerID="cri-o://1f4276d1d9c3894f4b7ccd2d8622cc95da988f32fd0a009d8f0acb8310cff86e" gracePeriod=604795 Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.753041 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.792829 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.848729 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.870873 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 19:11:09 crc kubenswrapper[4897]: E0214 19:11:09.872595 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-api" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.872633 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-api" Feb 14 19:11:09 crc kubenswrapper[4897]: E0214 19:11:09.872721 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-evaluator" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.874868 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-evaluator" Feb 14 19:11:09 crc kubenswrapper[4897]: E0214 19:11:09.874986 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-listener" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.875002 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-listener" Feb 14 19:11:09 crc kubenswrapper[4897]: E0214 19:11:09.875060 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-notifier" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.875071 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-notifier" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.876575 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-api" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.876641 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-evaluator" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.876718 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-listener" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.876746 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" containerName="aodh-notifier" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.883275 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.887448 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.890122 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.894996 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.896444 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.896807 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.900394 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5zcr5" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.918198 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-scripts\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.918347 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d6p\" (UniqueName: \"kubernetes.io/projected/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-kube-api-access-r6d6p\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.918416 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-public-tls-certs\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.918444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.918531 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-internal-tls-certs\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:09 crc kubenswrapper[4897]: I0214 19:11:09.918592 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-config-data\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.022184 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-public-tls-certs\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.022449 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: 
I0214 19:11:10.022537 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-internal-tls-certs\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.022571 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-config-data\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.022624 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-scripts\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.022688 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d6p\" (UniqueName: \"kubernetes.io/projected/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-kube-api-access-r6d6p\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.034878 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.037482 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-scripts\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 
14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.038139 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-internal-tls-certs\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.042580 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d6p\" (UniqueName: \"kubernetes.io/projected/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-kube-api-access-r6d6p\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.076935 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-public-tls-certs\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.078151 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b9fa1a-7c1c-464e-a03e-8067e2bb6c80-config-data\") pod \"aodh-0\" (UID: \"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80\") " pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.212855 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.743810 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 19:11:10 crc kubenswrapper[4897]: I0214 19:11:10.776345 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80","Type":"ContainerStarted","Data":"5b5637115e2c9b59fa53f0c8f5478c2dd3c94b03a4a29f4ecbaa4ba989ad4750"} Feb 14 19:11:11 crc kubenswrapper[4897]: I0214 19:11:11.792604 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80","Type":"ContainerStarted","Data":"c1a9d2695bfbb179a83dccc49158ade16bb93d0241f8947603315d22502cbe97"} Feb 14 19:11:11 crc kubenswrapper[4897]: I0214 19:11:11.811973 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944b8f01-b27e-4d2a-b198-b44a9b10e47b" path="/var/lib/kubelet/pods/944b8f01-b27e-4d2a-b198-b44a9b10e47b/volumes" Feb 14 19:11:12 crc kubenswrapper[4897]: I0214 19:11:12.815548 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80","Type":"ContainerStarted","Data":"46f7cbcaa5abb52eade24f7be9d53d5057eb1a78993ee28ba6c513659aeb20b7"} Feb 14 19:11:13 crc kubenswrapper[4897]: I0214 19:11:13.562249 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 14 19:11:13 crc kubenswrapper[4897]: I0214 19:11:13.869425 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80","Type":"ContainerStarted","Data":"3c8a2cd982e06fee1fbad9bf54d7d5353b5bc63b407cf2a4ba5144d2e4074624"} Feb 14 19:11:14 crc kubenswrapper[4897]: I0214 19:11:14.881104 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05b9fa1a-7c1c-464e-a03e-8067e2bb6c80","Type":"ContainerStarted","Data":"42100769425bb6fae3b2e74b159a1560f7f2bf3acb7018270c6c9935c5d12500"} Feb 14 19:11:14 crc kubenswrapper[4897]: I0214 19:11:14.908219 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.216129438 podStartE2EDuration="5.908197161s" podCreationTimestamp="2026-02-14 19:11:09 +0000 UTC" firstStartedPulling="2026-02-14 19:11:10.751676294 +0000 UTC m=+1723.728084777" lastFinishedPulling="2026-02-14 19:11:14.443744017 +0000 UTC m=+1727.420152500" observedRunningTime="2026-02-14 19:11:14.903089062 +0000 UTC m=+1727.879497555" watchObservedRunningTime="2026-02-14 19:11:14.908197161 +0000 UTC m=+1727.884605644" Feb 14 19:11:15 crc kubenswrapper[4897]: I0214 19:11:15.897047 4897 generic.go:334] "Generic (PLEG): container finished" podID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerID="1f4276d1d9c3894f4b7ccd2d8622cc95da988f32fd0a009d8f0acb8310cff86e" exitCode=0 Feb 14 19:11:15 crc kubenswrapper[4897]: I0214 19:11:15.897337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c8eb488b-8b48-4dea-8a34-dee3346005ef","Type":"ContainerDied","Data":"1f4276d1d9c3894f4b7ccd2d8622cc95da988f32fd0a009d8f0acb8310cff86e"} Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.251105 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.396730 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-plugins\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.397077 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-server-conf\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.398360 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.398444 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-erlang-cookie\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.398682 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-plugins-conf\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.398813 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-config-data\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.399181 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-confd\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.399489 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-tls\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.399683 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb488b-8b48-4dea-8a34-dee3346005ef-erlang-cookie-secret\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.399744 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvpmx\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-kube-api-access-xvpmx\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.399817 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb488b-8b48-4dea-8a34-dee3346005ef-pod-info\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") " Feb 14 19:11:16 
crc kubenswrapper[4897]: I0214 19:11:16.403545 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.405610 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.407261 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.412858 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8eb488b-8b48-4dea-8a34-dee3346005ef-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.415358 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-kube-api-access-xvpmx" (OuterVolumeSpecName: "kube-api-access-xvpmx") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "kube-api-access-xvpmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.423009 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.426839 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c8eb488b-8b48-4dea-8a34-dee3346005ef-pod-info" (OuterVolumeSpecName: "pod-info") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.505486 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.505945 4897 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8eb488b-8b48-4dea-8a34-dee3346005ef-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.505968 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvpmx\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-kube-api-access-xvpmx\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.505984 4897 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8eb488b-8b48-4dea-8a34-dee3346005ef-pod-info\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.506042 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.506062 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.506076 4897 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.512818 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-server-conf" (OuterVolumeSpecName: "server-conf") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.516591 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-config-data" (OuterVolumeSpecName: "config-data") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: E0214 19:11:16.539101 4897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f podName:c8eb488b-8b48-4dea-8a34-dee3346005ef nodeName:}" failed. No retries permitted until 2026-02-14 19:11:17.039018361 +0000 UTC m=+1730.015426844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.589050 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.608112 4897 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-server-conf\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.608137 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8eb488b-8b48-4dea-8a34-dee3346005ef-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.608146 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8eb488b-8b48-4dea-8a34-dee3346005ef-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.911626 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"c8eb488b-8b48-4dea-8a34-dee3346005ef","Type":"ContainerDied","Data":"41d7f57883ed13bd8d08d07218a119dbac5e13e0d5bbc38cae3d44024d9798af"}
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.912043 4897 scope.go:117] "RemoveContainer" containerID="1f4276d1d9c3894f4b7ccd2d8622cc95da988f32fd0a009d8f0acb8310cff86e"
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.912229 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 19:11:16 crc kubenswrapper[4897]: I0214 19:11:16.979215 4897 scope.go:117] "RemoveContainer" containerID="a95df9cbd2a6de16e6cd9decf3036159b9c57f996a07f4fb70e3865a9af7ea81"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.131900 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"c8eb488b-8b48-4dea-8a34-dee3346005ef\" (UID: \"c8eb488b-8b48-4dea-8a34-dee3346005ef\") "
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.246465 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f" (OuterVolumeSpecName: "persistence") pod "c8eb488b-8b48-4dea-8a34-dee3346005ef" (UID: "c8eb488b-8b48-4dea-8a34-dee3346005ef"). InnerVolumeSpecName "pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.337630 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") on node \"crc\" "
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.355799 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.374397 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.389118 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:11:17 crc kubenswrapper[4897]: E0214 19:11:17.389590 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="setup-container"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.389603 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="setup-container"
Feb 14 19:11:17 crc kubenswrapper[4897]: E0214 19:11:17.389622 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="rabbitmq"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.389627 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="rabbitmq"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.389858 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" containerName="rabbitmq"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.391002 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.391712 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.391927 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f") on node "crc"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.439962 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") on node \"crc\" DevicePath \"\""
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.445840 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.541695 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542076 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-config-data\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542097 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-server-conf\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542115 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxl8\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-kube-api-access-xpxl8\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542156 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542430 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-pod-info\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542569 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542683 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542709 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542887 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.542911 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647495 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-config-data\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647530 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-server-conf\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647546 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxl8\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-kube-api-access-xpxl8\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647590 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647637 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-pod-info\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647685 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647699 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647745 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647760 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.647785 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.648851 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.649658 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.653559 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-server-conf\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.653785 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-config-data\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.654087 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.654385 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.654405 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8298b982ac0a8950d87841fa11447940cdea275839e8718e250a1f9acab59f7/globalmount\"" pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.654600 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-pod-info\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.654977 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.656591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.657537 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.672354 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxl8\" (UniqueName: \"kubernetes.io/projected/01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe-kube-api-access-xpxl8\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.714432 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa38a72-3f79-47d5-bd16-25282f4b764f\") pod \"rabbitmq-server-1\" (UID: \"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe\") " pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.740873 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 19:11:17 crc kubenswrapper[4897]: I0214 19:11:17.812494 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8eb488b-8b48-4dea-8a34-dee3346005ef" path="/var/lib/kubelet/pods/c8eb488b-8b48-4dea-8a34-dee3346005ef/volumes"
Feb 14 19:11:18 crc kubenswrapper[4897]: I0214 19:11:18.266124 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 19:11:18 crc kubenswrapper[4897]: I0214 19:11:18.963608 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe","Type":"ContainerStarted","Data":"f717d2be5806c4a43e029509619811577bfa9cb110037dddeb903245207974bf"}
Feb 14 19:11:20 crc kubenswrapper[4897]: I0214 19:11:20.993454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe","Type":"ContainerStarted","Data":"46bbae4fd2486204a09fe87b17f4a669a0dbd1180bd6b2b0b8d898724edc9eb2"}
Feb 14 19:11:23 crc kubenswrapper[4897]: I0214 19:11:23.794420 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:11:23 crc kubenswrapper[4897]: E0214 19:11:23.795076 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:11:34 crc kubenswrapper[4897]: I0214 19:11:34.795282 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:11:34 crc kubenswrapper[4897]: E0214 19:11:34.796160 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:11:37 crc kubenswrapper[4897]: I0214 19:11:37.709820 4897 scope.go:117] "RemoveContainer" containerID="eee94c9fd239e653cc0a0ffca6a13a2f2f49f3bf9e7f99594c093072baed3b5f"
Feb 14 19:11:37 crc kubenswrapper[4897]: I0214 19:11:37.756259 4897 scope.go:117] "RemoveContainer" containerID="0148c16d6c818afd1210fd9f66d1e08ddc906dda9c37b68da948287b3ca66b8b"
Feb 14 19:11:37 crc kubenswrapper[4897]: I0214 19:11:37.811151 4897 scope.go:117] "RemoveContainer" containerID="ebdbb10eebc8deea4b7f629fcf730b38457933a0731c74ceac878a4d4864ca1c"
Feb 14 19:11:37 crc kubenswrapper[4897]: I0214 19:11:37.857354 4897 scope.go:117] "RemoveContainer" containerID="5f2587ec5324386036de347b4ccf04d694c83f6a3d608ca0151c373a8dd5dd21"
Feb 14 19:11:45 crc kubenswrapper[4897]: I0214 19:11:45.799145 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:11:45 crc kubenswrapper[4897]: E0214 19:11:45.799932 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:11:53 crc kubenswrapper[4897]: I0214 19:11:53.471682 4897 generic.go:334] "Generic (PLEG): container finished" podID="01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe" containerID="46bbae4fd2486204a09fe87b17f4a669a0dbd1180bd6b2b0b8d898724edc9eb2" exitCode=0
Feb 14 19:11:53 crc kubenswrapper[4897]: I0214 19:11:53.471777 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe","Type":"ContainerDied","Data":"46bbae4fd2486204a09fe87b17f4a669a0dbd1180bd6b2b0b8d898724edc9eb2"}
Feb 14 19:11:54 crc kubenswrapper[4897]: I0214 19:11:54.489872 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe","Type":"ContainerStarted","Data":"3377b54495b2ccf09c0865de89f9144648568836ad94e130b5dcd630d862a7b3"}
Feb 14 19:11:54 crc kubenswrapper[4897]: I0214 19:11:54.490842 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Feb 14 19:11:54 crc kubenswrapper[4897]: I0214 19:11:54.547412 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.547390232 podStartE2EDuration="37.547390232s" podCreationTimestamp="2026-02-14 19:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:11:54.531413263 +0000 UTC m=+1767.507821786" watchObservedRunningTime="2026-02-14 19:11:54.547390232 +0000 UTC m=+1767.523798725"
Feb 14 19:11:56 crc kubenswrapper[4897]: I0214 19:11:56.793848 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:11:56 crc kubenswrapper[4897]: E0214 19:11:56.794491 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:12:07 crc kubenswrapper[4897]: I0214 19:12:07.744399 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Feb 14 19:12:07 crc kubenswrapper[4897]: I0214 19:12:07.826301 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:12:07 crc kubenswrapper[4897]: E0214 19:12:07.826666 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:12:07 crc kubenswrapper[4897]: I0214 19:12:07.828082 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 14 19:12:12 crc kubenswrapper[4897]: I0214 19:12:12.625815 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="rabbitmq" containerID="cri-o://a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831" gracePeriod=604796
Feb 14 19:12:13 crc kubenswrapper[4897]: I0214 19:12:13.516158 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.315560 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.436082 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-config-data\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.436125 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-plugins\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.436161 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-tls\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.436788 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.436854 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-plugins-conf\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.436971 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32d6ef5f-5f6d-4563-91e7-94928fbe901d-erlang-cookie-secret\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.437000 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32d6ef5f-5f6d-4563-91e7-94928fbe901d-pod-info\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.437059 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-confd\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.437129 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr2xq\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-kube-api-access-nr2xq\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.437154 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-server-conf\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.437203 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-erlang-cookie\") pod \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\" (UID: \"32d6ef5f-5f6d-4563-91e7-94928fbe901d\") "
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.437567 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.438103 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.439759 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.444195 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d6ef5f-5f6d-4563-91e7-94928fbe901d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.445325 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.461251 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.461300 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-kube-api-access-nr2xq" (OuterVolumeSpecName: "kube-api-access-nr2xq") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "kube-api-access-nr2xq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.485986 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/32d6ef5f-5f6d-4563-91e7-94928fbe901d-pod-info" (OuterVolumeSpecName: "pod-info") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.494981 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-config-data" (OuterVolumeSpecName: "config-data") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.521546 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3" (OuterVolumeSpecName: "persistence") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "pvc-c16cec9c-062d-434a-aa5a-479a61b795d3".
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541899 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr2xq\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-kube-api-access-nr2xq\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541927 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541936 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541945 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541968 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") on node \"crc\" " Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541978 4897 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541988 4897 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32d6ef5f-5f6d-4563-91e7-94928fbe901d-erlang-cookie-secret\") on node \"crc\" DevicePath 
\"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.541996 4897 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32d6ef5f-5f6d-4563-91e7-94928fbe901d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.578288 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-server-conf" (OuterVolumeSpecName: "server-conf") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.590586 4897 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.590705 4897 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c16cec9c-062d-434a-aa5a-479a61b795d3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3") on node "crc" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.645145 4897 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32d6ef5f-5f6d-4563-91e7-94928fbe901d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.645179 4897 reconciler_common.go:293] "Volume detached for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.648276 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-confd" 
(OuterVolumeSpecName: "rabbitmq-confd") pod "32d6ef5f-5f6d-4563-91e7-94928fbe901d" (UID: "32d6ef5f-5f6d-4563-91e7-94928fbe901d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.748153 4897 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32d6ef5f-5f6d-4563-91e7-94928fbe901d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.815887 4897 generic.go:334] "Generic (PLEG): container finished" podID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerID="a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831" exitCode=0 Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.815980 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.818507 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32d6ef5f-5f6d-4563-91e7-94928fbe901d","Type":"ContainerDied","Data":"a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831"} Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.818552 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32d6ef5f-5f6d-4563-91e7-94928fbe901d","Type":"ContainerDied","Data":"e60cf35cbde19440beb3f4aa715cf94e9620365e6deae980ed1dbe70aac693e7"} Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.818974 4897 scope.go:117] "RemoveContainer" containerID="a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.879417 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.885394 4897 scope.go:117] "RemoveContainer" 
containerID="0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.920260 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.942648 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 19:12:19 crc kubenswrapper[4897]: E0214 19:12:19.943214 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="setup-container" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.943232 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="setup-container" Feb 14 19:12:19 crc kubenswrapper[4897]: E0214 19:12:19.943257 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="rabbitmq" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.943264 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="rabbitmq" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.943558 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" containerName="rabbitmq" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.945118 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.949113 4897 scope.go:117] "RemoveContainer" containerID="a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831" Feb 14 19:12:19 crc kubenswrapper[4897]: E0214 19:12:19.949554 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831\": container with ID starting with a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831 not found: ID does not exist" containerID="a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.949593 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831"} err="failed to get container status \"a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831\": rpc error: code = NotFound desc = could not find container \"a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831\": container with ID starting with a85f12fcf585b3b64a3ade14f0c0902b6ccba6b56c914aeec7902ae4dd26d831 not found: ID does not exist" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.949619 4897 scope.go:117] "RemoveContainer" containerID="0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04" Feb 14 19:12:19 crc kubenswrapper[4897]: E0214 19:12:19.949833 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04\": container with ID starting with 0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04 not found: ID does not exist" containerID="0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 
19:12:19.949863 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04"} err="failed to get container status \"0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04\": rpc error: code = NotFound desc = could not find container \"0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04\": container with ID starting with 0559c79f3f8e876a576da1845a722e9632027d8d7c9eb9100730338292c01d04 not found: ID does not exist" Feb 14 19:12:19 crc kubenswrapper[4897]: I0214 19:12:19.982554 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.061586 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.061926 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.061969 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a19f01a-c85c-492f-a991-b0a499611db3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.061998 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.062080 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.062370 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a19f01a-c85c-492f-a991-b0a499611db3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.062677 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mrx\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-kube-api-access-t2mrx\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.062716 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.063057 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.063091 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.063882 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165648 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165720 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a19f01a-c85c-492f-a991-b0a499611db3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165774 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2mrx\" (UniqueName: 
\"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-kube-api-access-t2mrx\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165796 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165829 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165845 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.165941 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.166060 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.166082 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.166096 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a19f01a-c85c-492f-a991-b0a499611db3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.166113 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.166839 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.166864 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.168745 4897 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.168989 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.169531 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a19f01a-c85c-492f-a991-b0a499611db3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.172263 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a19f01a-c85c-492f-a991-b0a499611db3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.176763 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.177748 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a19f01a-c85c-492f-a991-b0a499611db3-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.189835 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.210132 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2mrx\" (UniqueName: \"kubernetes.io/projected/8a19f01a-c85c-492f-a991-b0a499611db3-kube-api-access-t2mrx\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.306944 4897 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.306984 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bdd9fdd2f7f7e3465101c97ccaf93539e86bf50672dc0be4645c042fca69f0d6/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.401847 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c16cec9c-062d-434a-aa5a-479a61b795d3\") pod \"rabbitmq-server-0\" (UID: \"8a19f01a-c85c-492f-a991-b0a499611db3\") " pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.639049 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 19:12:20 crc kubenswrapper[4897]: I0214 19:12:20.795379 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:12:20 crc kubenswrapper[4897]: E0214 19:12:20.796160 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:12:21 crc kubenswrapper[4897]: I0214 19:12:21.144847 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 19:12:21 crc kubenswrapper[4897]: I0214 19:12:21.808929 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d6ef5f-5f6d-4563-91e7-94928fbe901d" path="/var/lib/kubelet/pods/32d6ef5f-5f6d-4563-91e7-94928fbe901d/volumes" Feb 14 19:12:21 crc kubenswrapper[4897]: I0214 19:12:21.872941 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a19f01a-c85c-492f-a991-b0a499611db3","Type":"ContainerStarted","Data":"ad435b3ca8a112e87e959c4ede84d632463c6bb7ed51cbada61ab18439582818"} Feb 14 19:12:23 crc kubenswrapper[4897]: I0214 19:12:23.898812 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a19f01a-c85c-492f-a991-b0a499611db3","Type":"ContainerStarted","Data":"bc01a8ecef5178712d8c97829d6eaf4418dcbf55d8d8d8712d24daabd95a4068"} Feb 14 19:12:32 crc kubenswrapper[4897]: I0214 19:12:32.795497 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:12:32 crc kubenswrapper[4897]: E0214 19:12:32.796681 4897 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:12:38 crc kubenswrapper[4897]: I0214 19:12:38.132146 4897 scope.go:117] "RemoveContainer" containerID="e3e9114fc9373e26b71f0639699cd234267efb30b593aac5d5850cf5a83642d9" Feb 14 19:12:38 crc kubenswrapper[4897]: I0214 19:12:38.168036 4897 scope.go:117] "RemoveContainer" containerID="83274da8965f06d985e62ea5e9947a8492df9f55c1a320626386a00ae230fdc1" Feb 14 19:12:44 crc kubenswrapper[4897]: I0214 19:12:44.795247 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:12:44 crc kubenswrapper[4897]: E0214 19:12:44.796617 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:12:56 crc kubenswrapper[4897]: I0214 19:12:56.343723 4897 generic.go:334] "Generic (PLEG): container finished" podID="8a19f01a-c85c-492f-a991-b0a499611db3" containerID="bc01a8ecef5178712d8c97829d6eaf4418dcbf55d8d8d8712d24daabd95a4068" exitCode=0 Feb 14 19:12:56 crc kubenswrapper[4897]: I0214 19:12:56.344196 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"8a19f01a-c85c-492f-a991-b0a499611db3","Type":"ContainerDied","Data":"bc01a8ecef5178712d8c97829d6eaf4418dcbf55d8d8d8712d24daabd95a4068"} Feb 14 19:12:57 crc kubenswrapper[4897]: I0214 19:12:57.364175 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a19f01a-c85c-492f-a991-b0a499611db3","Type":"ContainerStarted","Data":"ac96fef351bc8f58a7f2390736e765f9022e8484b937414e235a2d2ea5ccd0df"} Feb 14 19:12:57 crc kubenswrapper[4897]: I0214 19:12:57.365078 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 19:12:57 crc kubenswrapper[4897]: I0214 19:12:57.403913 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.403893396 podStartE2EDuration="38.403893396s" podCreationTimestamp="2026-02-14 19:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:12:57.391675685 +0000 UTC m=+1830.368084178" watchObservedRunningTime="2026-02-14 19:12:57.403893396 +0000 UTC m=+1830.380301879" Feb 14 19:12:58 crc kubenswrapper[4897]: I0214 19:12:58.794705 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:12:58 crc kubenswrapper[4897]: E0214 19:12:58.795521 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:13:10 crc kubenswrapper[4897]: I0214 19:13:10.642727 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Feb 14 19:13:12 crc kubenswrapper[4897]: I0214 19:13:12.795449 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:13:12 crc kubenswrapper[4897]: E0214 19:13:12.796182 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:13:27 crc kubenswrapper[4897]: I0214 19:13:27.811014 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:13:27 crc kubenswrapper[4897]: E0214 19:13:27.812084 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:13:38 crc kubenswrapper[4897]: I0214 19:13:38.296801 4897 scope.go:117] "RemoveContainer" containerID="4a771529b77da3107bf7598218eb12e5e79ea442afbb4ed2ea26bdae62f474b6" Feb 14 19:13:38 crc kubenswrapper[4897]: I0214 19:13:38.359080 4897 scope.go:117] "RemoveContainer" containerID="3dce2f8ca0ce29f937e9656ad397b0b4280859f17385c52f799bb314b5d7703d" Feb 14 19:13:38 crc kubenswrapper[4897]: I0214 19:13:38.414283 4897 scope.go:117] "RemoveContainer" containerID="13ace9822242cdd4466d44c25c2cc75782bef32490759595330f5cf6b28ec21d" Feb 14 19:13:38 crc kubenswrapper[4897]: I0214 19:13:38.490691 4897 scope.go:117] 
"RemoveContainer" containerID="e7f8ca1035fe4ab44b56a7b5335080d9770958fed63e28ce65ad1d38e20044cc" Feb 14 19:13:40 crc kubenswrapper[4897]: I0214 19:13:40.794294 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:13:40 crc kubenswrapper[4897]: E0214 19:13:40.795344 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:13:45 crc kubenswrapper[4897]: I0214 19:13:45.078507 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvs2m"] Feb 14 19:13:45 crc kubenswrapper[4897]: I0214 19:13:45.093694 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6000-account-create-update-phbsk"] Feb 14 19:13:45 crc kubenswrapper[4897]: I0214 19:13:45.107294 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6000-account-create-update-phbsk"] Feb 14 19:13:45 crc kubenswrapper[4897]: I0214 19:13:45.118162 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-xvs2m"] Feb 14 19:13:45 crc kubenswrapper[4897]: I0214 19:13:45.820761 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4940f666-ec19-4b4c-9eb6-4cce233844f9" path="/var/lib/kubelet/pods/4940f666-ec19-4b4c-9eb6-4cce233844f9/volumes" Feb 14 19:13:45 crc kubenswrapper[4897]: I0214 19:13:45.824725 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cbf86b-ab14-4d24-953d-5dc1388d0371" path="/var/lib/kubelet/pods/59cbf86b-ab14-4d24-953d-5dc1388d0371/volumes" Feb 14 19:13:46 crc 
kubenswrapper[4897]: I0214 19:13:46.042473 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-6627-account-create-update-jr9tq"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.057092 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vs6xr"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.074083 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4dvpw"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.084762 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vs6xr"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.099843 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7b27-account-create-update-p5jlf"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.108005 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-6627-account-create-update-jr9tq"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.118880 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xlfcf"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.130480 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7b27-account-create-update-p5jlf"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.140767 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4dvpw"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.150754 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xlfcf"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.160724 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2543-account-create-update-66jmj"] Feb 14 19:13:46 crc kubenswrapper[4897]: I0214 19:13:46.171352 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-2543-account-create-update-66jmj"] Feb 14 19:13:47 crc kubenswrapper[4897]: I0214 19:13:47.808634 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064037fd-b986-4cd9-bb3e-1000c25a3606" path="/var/lib/kubelet/pods/064037fd-b986-4cd9-bb3e-1000c25a3606/volumes" Feb 14 19:13:47 crc kubenswrapper[4897]: I0214 19:13:47.810240 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b0c552-7591-4dbd-85ae-bab84ebb7763" path="/var/lib/kubelet/pods/17b0c552-7591-4dbd-85ae-bab84ebb7763/volumes" Feb 14 19:13:47 crc kubenswrapper[4897]: I0214 19:13:47.811253 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b336c8ba-c121-4c43-a75b-8111283a595b" path="/var/lib/kubelet/pods/b336c8ba-c121-4c43-a75b-8111283a595b/volumes" Feb 14 19:13:47 crc kubenswrapper[4897]: I0214 19:13:47.812186 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20fa3de-5325-4d13-a447-78392f703250" path="/var/lib/kubelet/pods/c20fa3de-5325-4d13-a447-78392f703250/volumes" Feb 14 19:13:47 crc kubenswrapper[4897]: I0214 19:13:47.813257 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9" path="/var/lib/kubelet/pods/d5eb3b81-c6d0-4f33-8f62-2ad5d3c703f9/volumes" Feb 14 19:13:47 crc kubenswrapper[4897]: I0214 19:13:47.813894 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5bbf96-28f9-4afd-ae13-d4927c001e7a" path="/var/lib/kubelet/pods/fe5bbf96-28f9-4afd-ae13-d4927c001e7a/volumes" Feb 14 19:13:53 crc kubenswrapper[4897]: I0214 19:13:53.069499 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"] Feb 14 19:13:53 crc kubenswrapper[4897]: I0214 19:13:53.091726 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hrbjk"] Feb 14 19:13:53 crc kubenswrapper[4897]: I0214 19:13:53.816808 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ddc51d6-ba42-4a8c-8488-24ab847bd808" path="/var/lib/kubelet/pods/1ddc51d6-ba42-4a8c-8488-24ab847bd808/volumes" Feb 14 19:13:54 crc kubenswrapper[4897]: I0214 19:13:54.053663 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-9baa-account-create-update-rbzr5"] Feb 14 19:13:54 crc kubenswrapper[4897]: I0214 19:13:54.079361 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-9baa-account-create-update-rbzr5"] Feb 14 19:13:54 crc kubenswrapper[4897]: I0214 19:13:54.793730 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:13:54 crc kubenswrapper[4897]: E0214 19:13:54.794432 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:13:55 crc kubenswrapper[4897]: I0214 19:13:55.818288 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e556a75-3106-43db-b4da-53c6df99cd35" path="/var/lib/kubelet/pods/7e556a75-3106-43db-b4da-53c6df99cd35/volumes" Feb 14 19:14:03 crc kubenswrapper[4897]: I0214 19:14:03.033745 4897 generic.go:334] "Generic (PLEG): container finished" podID="afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" containerID="209719dc4cd051982d8dfbc5bf76a44ba4e4f12224d7e34ec7c79ebffd41dfff" exitCode=0 Feb 14 19:14:03 crc kubenswrapper[4897]: I0214 19:14:03.034428 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" 
event={"ID":"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2","Type":"ContainerDied","Data":"209719dc4cd051982d8dfbc5bf76a44ba4e4f12224d7e34ec7c79ebffd41dfff"} Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.649076 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.761223 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-ssh-key-openstack-edpm-ipam\") pod \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.761480 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-bootstrap-combined-ca-bundle\") pod \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.761603 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6zjk\" (UniqueName: \"kubernetes.io/projected/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-kube-api-access-j6zjk\") pod \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.761794 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-inventory\") pod \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\" (UID: \"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2\") " Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.767057 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" (UID: "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.767733 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-kube-api-access-j6zjk" (OuterVolumeSpecName: "kube-api-access-j6zjk") pod "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" (UID: "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2"). InnerVolumeSpecName "kube-api-access-j6zjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.809340 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-inventory" (OuterVolumeSpecName: "inventory") pod "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" (UID: "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.811378 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" (UID: "afff7c3d-a238-49b8-8b7c-d041c4eb9ac2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.868480 4897 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.869174 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zjk\" (UniqueName: \"kubernetes.io/projected/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-kube-api-access-j6zjk\") on node \"crc\" DevicePath \"\"" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.869244 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:14:04 crc kubenswrapper[4897]: I0214 19:14:04.869302 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/afff7c3d-a238-49b8-8b7c-d041c4eb9ac2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.069863 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" event={"ID":"afff7c3d-a238-49b8-8b7c-d041c4eb9ac2","Type":"ContainerDied","Data":"1971ff5d9170bd832fe012daa64ef1f67521c137c390834c8fa8efcafecd8df4"} Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.069921 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1971ff5d9170bd832fe012daa64ef1f67521c137c390834c8fa8efcafecd8df4" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.069974 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.238099 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp"] Feb 14 19:14:05 crc kubenswrapper[4897]: E0214 19:14:05.239115 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.239139 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.239461 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="afff7c3d-a238-49b8-8b7c-d041c4eb9ac2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.241937 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.246153 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.246282 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.246472 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.246474 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.249560 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp"] Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.279435 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.279572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 
19:14:05.279620 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmlp6\" (UniqueName: \"kubernetes.io/projected/1587215e-5d70-4aa9-b4a6-e3f84ae07453-kube-api-access-cmlp6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.381474 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.381637 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.381709 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmlp6\" (UniqueName: \"kubernetes.io/projected/1587215e-5d70-4aa9-b4a6-e3f84ae07453-kube-api-access-cmlp6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.385504 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.386281 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.404529 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmlp6\" (UniqueName: \"kubernetes.io/projected/1587215e-5d70-4aa9-b4a6-e3f84ae07453-kube-api-access-cmlp6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.565126 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:14:05 crc kubenswrapper[4897]: I0214 19:14:05.793424 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:14:05 crc kubenswrapper[4897]: E0214 19:14:05.794209 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:14:06 crc kubenswrapper[4897]: I0214 19:14:06.218511 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp"] Feb 14 19:14:06 crc kubenswrapper[4897]: W0214 19:14:06.225794 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1587215e_5d70_4aa9_b4a6_e3f84ae07453.slice/crio-9166870b8742a357037fb4e542980303fba7f0246137d82888a7086f1edf4b08 WatchSource:0}: Error finding container 9166870b8742a357037fb4e542980303fba7f0246137d82888a7086f1edf4b08: Status 404 returned error can't find the container with id 9166870b8742a357037fb4e542980303fba7f0246137d82888a7086f1edf4b08 Feb 14 19:14:07 crc kubenswrapper[4897]: I0214 19:14:07.107202 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" event={"ID":"1587215e-5d70-4aa9-b4a6-e3f84ae07453","Type":"ContainerStarted","Data":"9166870b8742a357037fb4e542980303fba7f0246137d82888a7086f1edf4b08"} Feb 14 19:14:08 crc kubenswrapper[4897]: I0214 19:14:08.085353 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-9gch6"] Feb 14 19:14:08 crc kubenswrapper[4897]: I0214 19:14:08.104342 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9gch6"] Feb 14 19:14:08 crc kubenswrapper[4897]: I0214 19:14:08.119812 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" event={"ID":"1587215e-5d70-4aa9-b4a6-e3f84ae07453","Type":"ContainerStarted","Data":"83e223e566865ec2620bfbbb8bad2f17632051dafb537feff8b7e433d421efe2"} Feb 14 19:14:08 crc kubenswrapper[4897]: I0214 19:14:08.154977 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" podStartSLOduration=2.444856362 podStartE2EDuration="3.154958807s" podCreationTimestamp="2026-02-14 19:14:05 +0000 UTC" firstStartedPulling="2026-02-14 19:14:06.228832884 +0000 UTC m=+1899.205241367" lastFinishedPulling="2026-02-14 19:14:06.938935289 +0000 UTC m=+1899.915343812" observedRunningTime="2026-02-14 19:14:08.141352104 +0000 UTC m=+1901.117760597" watchObservedRunningTime="2026-02-14 19:14:08.154958807 +0000 UTC m=+1901.131367290" Feb 14 19:14:09 crc kubenswrapper[4897]: I0214 19:14:09.817913 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0aeb6a0-bc14-4f52-8c20-d483e67320b5" path="/var/lib/kubelet/pods/d0aeb6a0-bc14-4f52-8c20-d483e67320b5/volumes" Feb 14 19:14:17 crc kubenswrapper[4897]: I0214 19:14:17.041952 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7gfbg"] Feb 14 19:14:17 crc kubenswrapper[4897]: I0214 19:14:17.056753 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7gfbg"] Feb 14 19:14:17 crc kubenswrapper[4897]: I0214 19:14:17.811451 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731750fa-408a-46ef-89bb-5491267222fb" 
path="/var/lib/kubelet/pods/731750fa-408a-46ef-89bb-5491267222fb/volumes" Feb 14 19:14:19 crc kubenswrapper[4897]: I0214 19:14:19.823272 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:14:19 crc kubenswrapper[4897]: E0214 19:14:19.823685 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:14:30 crc kubenswrapper[4897]: I0214 19:14:30.056060 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2zx2p"] Feb 14 19:14:30 crc kubenswrapper[4897]: I0214 19:14:30.074482 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2zx2p"] Feb 14 19:14:30 crc kubenswrapper[4897]: I0214 19:14:30.793776 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:14:30 crc kubenswrapper[4897]: E0214 19:14:30.794354 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.050570 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mgj9f"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.070101 4897 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/neutron-db-create-mgj9f"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.084964 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c567-account-create-update-nzrhx"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.094985 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a45e-account-create-update-g6bl6"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.105207 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c567-account-create-update-nzrhx"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.116534 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-f75f-account-create-update-8kqcq"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.126896 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a45e-account-create-update-g6bl6"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.136764 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-f75f-account-create-update-8kqcq"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.148294 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-2wcvc"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.157354 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-shmjs"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.166727 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-822c-account-create-update-wk5r5"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.176581 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-2wcvc"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.186671 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-shmjs"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.196567 4897 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-822c-account-create-update-wk5r5"] Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.820468 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14911dd9-fb30-4512-bfff-1e5acd6b0b50" path="/var/lib/kubelet/pods/14911dd9-fb30-4512-bfff-1e5acd6b0b50/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.823492 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265faf60-c453-4644-9b9a-bb4d6d53cb74" path="/var/lib/kubelet/pods/265faf60-c453-4644-9b9a-bb4d6d53cb74/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.825698 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3ace23-df5c-40ae-a726-90ebe47317ac" path="/var/lib/kubelet/pods/4d3ace23-df5c-40ae-a726-90ebe47317ac/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.827722 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6566173c-4067-420f-8df0-ad21cab585fd" path="/var/lib/kubelet/pods/6566173c-4067-420f-8df0-ad21cab585fd/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.832505 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778f41f8-59c5-4a26-9dcc-409778b0bddd" path="/var/lib/kubelet/pods/778f41f8-59c5-4a26-9dcc-409778b0bddd/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.835259 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ba161a-cd6c-4998-8094-a4d05d9722d2" path="/var/lib/kubelet/pods/83ba161a-cd6c-4998-8094-a4d05d9722d2/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.837074 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ea3e1a-f046-40c8-9af9-72a1fc228a7c" path="/var/lib/kubelet/pods/88ea3e1a-f046-40c8-9af9-72a1fc228a7c/volumes" Feb 14 19:14:31 crc kubenswrapper[4897]: I0214 19:14:31.839801 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="915fbcb5-f3cd-4597-a771-54c7ebae16a8" path="/var/lib/kubelet/pods/915fbcb5-f3cd-4597-a771-54c7ebae16a8/volumes" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.618564 4897 scope.go:117] "RemoveContainer" containerID="35004ae46b90519f0a307a00890566c224d828b57484ce86c8db5cce73276be7" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.659681 4897 scope.go:117] "RemoveContainer" containerID="f5e64bfb563f23c6ec0fe6f5e0a4a36bee8bfe647cc7003eb26406bc73ce47a6" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.688135 4897 scope.go:117] "RemoveContainer" containerID="b6c780467831e9ecc8226893feb04cd5fb39d32275431c5cfe579b681fac3f02" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.724538 4897 scope.go:117] "RemoveContainer" containerID="c495243fd15aec00ff4c52117972de70493f702eca329f10008045249c618c50" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.798853 4897 scope.go:117] "RemoveContainer" containerID="9431cb9bbf0fb3594f8c7eb95f93b749700eb842dfc5699bd6c14340599534a8" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.825392 4897 scope.go:117] "RemoveContainer" containerID="4584029be60adf77f36e9076ad681cf2a9a6c580f2839fcb7574d0a471a06f0f" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.855281 4897 scope.go:117] "RemoveContainer" containerID="c8ae0358a1da4f7011f4b7fb3ca54d054d2b5cc51f5f21f3d18861677b8a13f0" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.877307 4897 scope.go:117] "RemoveContainer" containerID="7ed401468f44d5ee9f4acbfaa3c359b34bb2becc5e599ed8c47b3bd54fa18f84" Feb 14 19:14:38 crc kubenswrapper[4897]: I0214 19:14:38.947549 4897 scope.go:117] "RemoveContainer" containerID="3fc86ce73213728dfb4597851f035174890d5122dc4df74304b7dae14943da93" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.010439 4897 scope.go:117] "RemoveContainer" containerID="c74d445a737c375bee8a01d9bf3450f8e4268cf4ec0fae55c46f535645e79997" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.088907 4897 scope.go:117] 
"RemoveContainer" containerID="b79850ead4cbcf0016b329c20855eb14b393ef6e9bf11faa7571b8df600150c4" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.153708 4897 scope.go:117] "RemoveContainer" containerID="3e2962ffb0dcd6e9b3ecfe98106f4eb414eeecdc4cba132c04589a87453f174c" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.176777 4897 scope.go:117] "RemoveContainer" containerID="92d31e4bfe331edc54debbf0fa29daa0b4c6a31c37b7e70cf91c3ed1b7a0a7e2" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.197550 4897 scope.go:117] "RemoveContainer" containerID="75bf987ce8bfc743c8e7002ca68f6bd80b0cd27a2bcb19f9d8e8481a23063b43" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.218474 4897 scope.go:117] "RemoveContainer" containerID="e18080bfcdd4ee93a0bbe26ab97104aadfbb4e4778aaa6c8faafabc246290a60" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.243575 4897 scope.go:117] "RemoveContainer" containerID="fbf363a8091a98962cc88d42321a52dcefe63a2ba2c0f4dead34de765a46d9b1" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.274764 4897 scope.go:117] "RemoveContainer" containerID="6486ffdafc27d5a8464330b77e4278af807c73c5fb798426c68013d04ef615ba" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.303888 4897 scope.go:117] "RemoveContainer" containerID="22f630311c7bec2728ed93f90741d6afc03d046c71dcdeb1a0dae600ea5f579a" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.337438 4897 scope.go:117] "RemoveContainer" containerID="b80ec8163eb0793ed425e9a7a931f54593bd3245bbc6454c578b10d00c0069ba" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.367898 4897 scope.go:117] "RemoveContainer" containerID="fa44bbb6b1b409a42c81589c5d6d3fe0a9db210143a884471f840efec692a131" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.403165 4897 scope.go:117] "RemoveContainer" containerID="604f633fa0066f23160306e68341239037e1969dd6a9950b99139041efd51728" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.429987 4897 scope.go:117] "RemoveContainer" 
containerID="af8e13fa059457da4b28a33305d835bbbfcdca8d17c865c95f7c3bbcc0e7a01c" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.455115 4897 scope.go:117] "RemoveContainer" containerID="b9453114091ce0721c5829c8a28fcf2fdab580fada4c6f8accf9ca2b6c27bc6b" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.480754 4897 scope.go:117] "RemoveContainer" containerID="3b4bd92a7c76eecd3e4ea8a0eb8b9ffc335763f835cb28820e37b5104f0d8ef8" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.508916 4897 scope.go:117] "RemoveContainer" containerID="b9adf73bc90557a49b5277f4e1c17e9d8547ab4da364480f92f937bdac9ed118" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.536609 4897 scope.go:117] "RemoveContainer" containerID="eb5ef38e4e5417c9868a36bd645cef67b11fa1102dc361c744d7d29092a7d455" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.568746 4897 scope.go:117] "RemoveContainer" containerID="67cac65508d4a33fe9a215b3ce18d5580367f6d0223cce05773a5e4995929415" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.598338 4897 scope.go:117] "RemoveContainer" containerID="abd6d69c6c9760f3ab2587009eaa4590199627c35874477a70c22f833c6f384c" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.620681 4897 scope.go:117] "RemoveContainer" containerID="5ceee8207abb0f8e607590cb3be3de1accb9c50f6b0ec1319bfc1c74c7561608" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.638522 4897 scope.go:117] "RemoveContainer" containerID="3fffb61f615afaa98a0b5adbddabb548d77bd6b052a72ac670ddc2da16f9e975" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.669014 4897 scope.go:117] "RemoveContainer" containerID="99e1b916759cbd65cc2fe9c5eb37c5a4f92325c3cf72fc55eb003601db030e02" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.689483 4897 scope.go:117] "RemoveContainer" containerID="84b1b2ba21c137d84a2cecc0ab53e6c8e2ec2460981434e9231f02872d0e2d5a" Feb 14 19:14:39 crc kubenswrapper[4897]: I0214 19:14:39.712726 4897 scope.go:117] "RemoveContainer" 
containerID="222ac0e3a5fd2dabfd1e5940f06bbb37e4faa29f071c2dd63a406c165506d9ca" Feb 14 19:14:40 crc kubenswrapper[4897]: I0214 19:14:40.044197 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x7p8g"] Feb 14 19:14:40 crc kubenswrapper[4897]: I0214 19:14:40.056458 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x7p8g"] Feb 14 19:14:41 crc kubenswrapper[4897]: I0214 19:14:41.794913 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:14:41 crc kubenswrapper[4897]: E0214 19:14:41.796211 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:14:41 crc kubenswrapper[4897]: I0214 19:14:41.819743 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e7174e-f39e-41c4-8482-29f7d420c887" path="/var/lib/kubelet/pods/03e7174e-f39e-41c4-8482-29f7d420c887/volumes" Feb 14 19:14:55 crc kubenswrapper[4897]: I0214 19:14:55.793991 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:14:55 crc kubenswrapper[4897]: E0214 19:14:55.795144 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 
19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.170418 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn"] Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.174533 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.182849 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn"] Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.184560 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.184617 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.240132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrfl\" (UniqueName: \"kubernetes.io/projected/38610d34-ba7c-44fe-b975-6a8218c6937c-kube-api-access-6xrfl\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.240209 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38610d34-ba7c-44fe-b975-6a8218c6937c-config-volume\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.240394 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38610d34-ba7c-44fe-b975-6a8218c6937c-secret-volume\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.342468 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38610d34-ba7c-44fe-b975-6a8218c6937c-config-volume\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.342684 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38610d34-ba7c-44fe-b975-6a8218c6937c-secret-volume\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.342777 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrfl\" (UniqueName: \"kubernetes.io/projected/38610d34-ba7c-44fe-b975-6a8218c6937c-kube-api-access-6xrfl\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.343388 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38610d34-ba7c-44fe-b975-6a8218c6937c-config-volume\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.350191 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38610d34-ba7c-44fe-b975-6a8218c6937c-secret-volume\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.358349 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrfl\" (UniqueName: \"kubernetes.io/projected/38610d34-ba7c-44fe-b975-6a8218c6937c-kube-api-access-6xrfl\") pod \"collect-profiles-29518275-96ngn\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:00 crc kubenswrapper[4897]: I0214 19:15:00.507133 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:01 crc kubenswrapper[4897]: I0214 19:15:01.027187 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn"] Feb 14 19:15:01 crc kubenswrapper[4897]: E0214 19:15:01.904429 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38610d34_ba7c_44fe_b975_6a8218c6937c.slice/crio-926fe056f2b2bd17d840f888e0e3c737732eaa76736dbf33d0f5c4ce42ccfeed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38610d34_ba7c_44fe_b975_6a8218c6937c.slice/crio-conmon-926fe056f2b2bd17d840f888e0e3c737732eaa76736dbf33d0f5c4ce42ccfeed.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:15:02 crc kubenswrapper[4897]: I0214 19:15:02.017024 4897 generic.go:334] "Generic (PLEG): container finished" podID="38610d34-ba7c-44fe-b975-6a8218c6937c" containerID="926fe056f2b2bd17d840f888e0e3c737732eaa76736dbf33d0f5c4ce42ccfeed" exitCode=0 Feb 14 19:15:02 crc kubenswrapper[4897]: I0214 19:15:02.017111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" event={"ID":"38610d34-ba7c-44fe-b975-6a8218c6937c","Type":"ContainerDied","Data":"926fe056f2b2bd17d840f888e0e3c737732eaa76736dbf33d0f5c4ce42ccfeed"} Feb 14 19:15:02 crc kubenswrapper[4897]: I0214 19:15:02.017151 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" event={"ID":"38610d34-ba7c-44fe-b975-6a8218c6937c","Type":"ContainerStarted","Data":"9a19bc3cd0cb662dbd04ff13031af9084d1bf46c5dffa8d03724a8a890b8e59c"} Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.443436 4897 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.622083 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xrfl\" (UniqueName: \"kubernetes.io/projected/38610d34-ba7c-44fe-b975-6a8218c6937c-kube-api-access-6xrfl\") pod \"38610d34-ba7c-44fe-b975-6a8218c6937c\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.622583 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38610d34-ba7c-44fe-b975-6a8218c6937c-secret-volume\") pod \"38610d34-ba7c-44fe-b975-6a8218c6937c\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.622653 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38610d34-ba7c-44fe-b975-6a8218c6937c-config-volume\") pod \"38610d34-ba7c-44fe-b975-6a8218c6937c\" (UID: \"38610d34-ba7c-44fe-b975-6a8218c6937c\") " Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.623645 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38610d34-ba7c-44fe-b975-6a8218c6937c-config-volume" (OuterVolumeSpecName: "config-volume") pod "38610d34-ba7c-44fe-b975-6a8218c6937c" (UID: "38610d34-ba7c-44fe-b975-6a8218c6937c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.628948 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38610d34-ba7c-44fe-b975-6a8218c6937c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "38610d34-ba7c-44fe-b975-6a8218c6937c" (UID: "38610d34-ba7c-44fe-b975-6a8218c6937c"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.629314 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38610d34-ba7c-44fe-b975-6a8218c6937c-kube-api-access-6xrfl" (OuterVolumeSpecName: "kube-api-access-6xrfl") pod "38610d34-ba7c-44fe-b975-6a8218c6937c" (UID: "38610d34-ba7c-44fe-b975-6a8218c6937c"). InnerVolumeSpecName "kube-api-access-6xrfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.724912 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38610d34-ba7c-44fe-b975-6a8218c6937c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.724947 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38610d34-ba7c-44fe-b975-6a8218c6937c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 19:15:03 crc kubenswrapper[4897]: I0214 19:15:03.724960 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xrfl\" (UniqueName: \"kubernetes.io/projected/38610d34-ba7c-44fe-b975-6a8218c6937c-kube-api-access-6xrfl\") on node \"crc\" DevicePath \"\"" Feb 14 19:15:04 crc kubenswrapper[4897]: I0214 19:15:04.048053 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" event={"ID":"38610d34-ba7c-44fe-b975-6a8218c6937c","Type":"ContainerDied","Data":"9a19bc3cd0cb662dbd04ff13031af9084d1bf46c5dffa8d03724a8a890b8e59c"} Feb 14 19:15:04 crc kubenswrapper[4897]: I0214 19:15:04.048094 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a19bc3cd0cb662dbd04ff13031af9084d1bf46c5dffa8d03724a8a890b8e59c" Feb 14 19:15:04 crc kubenswrapper[4897]: I0214 19:15:04.048165 4897 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn" Feb 14 19:15:04 crc kubenswrapper[4897]: I0214 19:15:04.519712 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85"] Feb 14 19:15:04 crc kubenswrapper[4897]: I0214 19:15:04.535162 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518230-qxs85"] Feb 14 19:15:05 crc kubenswrapper[4897]: I0214 19:15:05.820100 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85830a53-70c2-433d-a359-025fababa083" path="/var/lib/kubelet/pods/85830a53-70c2-433d-a359-025fababa083/volumes" Feb 14 19:15:07 crc kubenswrapper[4897]: I0214 19:15:07.812852 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6" Feb 14 19:15:09 crc kubenswrapper[4897]: I0214 19:15:09.116384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"dd708665e8ea240d87012ffb10ef37fcbe9e649061cee70ad605f1da4f00112e"} Feb 14 19:15:14 crc kubenswrapper[4897]: I0214 19:15:14.067013 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qnrpp"] Feb 14 19:15:14 crc kubenswrapper[4897]: I0214 19:15:14.085695 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qnrpp"] Feb 14 19:15:15 crc kubenswrapper[4897]: I0214 19:15:15.819330 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec" path="/var/lib/kubelet/pods/a6ab0602-97a3-4dac-b1f2-31ea3e9ccfec/volumes" Feb 14 19:15:25 crc kubenswrapper[4897]: I0214 19:15:25.045347 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-9l57t"] Feb 14 19:15:25 crc kubenswrapper[4897]: I0214 19:15:25.067379 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-j2sgf"] Feb 14 19:15:25 crc kubenswrapper[4897]: I0214 19:15:25.079934 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-j2sgf"] Feb 14 19:15:25 crc kubenswrapper[4897]: I0214 19:15:25.092734 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9l57t"] Feb 14 19:15:25 crc kubenswrapper[4897]: I0214 19:15:25.810658 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4cf787d-aa82-449b-917e-b5863b11b429" path="/var/lib/kubelet/pods/e4cf787d-aa82-449b-917e-b5863b11b429/volumes" Feb 14 19:15:25 crc kubenswrapper[4897]: I0214 19:15:25.811652 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efcb9cd7-17f6-4705-96e9-40a25d718a72" path="/var/lib/kubelet/pods/efcb9cd7-17f6-4705-96e9-40a25d718a72/volumes" Feb 14 19:15:26 crc kubenswrapper[4897]: I0214 19:15:26.044794 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jsr6q"] Feb 14 19:15:26 crc kubenswrapper[4897]: I0214 19:15:26.093557 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jsr6q"] Feb 14 19:15:27 crc kubenswrapper[4897]: I0214 19:15:27.819948 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a362ef-bc82-43d1-93d2-81806d08bd50" path="/var/lib/kubelet/pods/d1a362ef-bc82-43d1-93d2-81806d08bd50/volumes" Feb 14 19:15:40 crc kubenswrapper[4897]: I0214 19:15:40.472476 4897 scope.go:117] "RemoveContainer" containerID="a378e17ae0874d53855434093e5097afb540694529af263653e557e5f2441b47" Feb 14 19:15:40 crc kubenswrapper[4897]: I0214 19:15:40.512116 4897 scope.go:117] "RemoveContainer" containerID="f8cd91ba1c8c6fb76daf258862386009b48967e82b99e1e0fbdf4a9bc00a4e60" Feb 14 19:15:40 crc kubenswrapper[4897]: I0214 
19:15:40.580307 4897 scope.go:117] "RemoveContainer" containerID="3da22534b4a50b124fd6dc677c4c4db3a8b75124372f35a822a5b2175fbb0745" Feb 14 19:15:40 crc kubenswrapper[4897]: I0214 19:15:40.651726 4897 scope.go:117] "RemoveContainer" containerID="98a95501c242ba75d29f36480bc1c47367e6d9a98059217e4839e3abf6e2dc23" Feb 14 19:15:40 crc kubenswrapper[4897]: I0214 19:15:40.704805 4897 scope.go:117] "RemoveContainer" containerID="888db2ff379959435c61dc26127d1e91334587a3e7b84df3b25c50f03facd53b" Feb 14 19:15:40 crc kubenswrapper[4897]: I0214 19:15:40.755559 4897 scope.go:117] "RemoveContainer" containerID="724b6b8d591ef873313016a2196eaf552614bb96abc2d4fabc2e66edcd2f2a8b" Feb 14 19:15:45 crc kubenswrapper[4897]: I0214 19:15:45.052413 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-577t2"] Feb 14 19:15:45 crc kubenswrapper[4897]: I0214 19:15:45.066433 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-577t2"] Feb 14 19:15:45 crc kubenswrapper[4897]: I0214 19:15:45.810364 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1" path="/var/lib/kubelet/pods/6fe6e36d-7ef1-478c-b6b4-0fbfc2fbfcc1/volumes" Feb 14 19:15:51 crc kubenswrapper[4897]: I0214 19:15:51.777612 4897 generic.go:334] "Generic (PLEG): container finished" podID="1587215e-5d70-4aa9-b4a6-e3f84ae07453" containerID="83e223e566865ec2620bfbbb8bad2f17632051dafb537feff8b7e433d421efe2" exitCode=0 Feb 14 19:15:51 crc kubenswrapper[4897]: I0214 19:15:51.777704 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" event={"ID":"1587215e-5d70-4aa9-b4a6-e3f84ae07453","Type":"ContainerDied","Data":"83e223e566865ec2620bfbbb8bad2f17632051dafb537feff8b7e433d421efe2"} Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.435258 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.538316 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-ssh-key-openstack-edpm-ipam\") pod \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.538470 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmlp6\" (UniqueName: \"kubernetes.io/projected/1587215e-5d70-4aa9-b4a6-e3f84ae07453-kube-api-access-cmlp6\") pod \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.538596 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-inventory\") pod \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\" (UID: \"1587215e-5d70-4aa9-b4a6-e3f84ae07453\") " Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.545333 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1587215e-5d70-4aa9-b4a6-e3f84ae07453-kube-api-access-cmlp6" (OuterVolumeSpecName: "kube-api-access-cmlp6") pod "1587215e-5d70-4aa9-b4a6-e3f84ae07453" (UID: "1587215e-5d70-4aa9-b4a6-e3f84ae07453"). InnerVolumeSpecName "kube-api-access-cmlp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.574596 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1587215e-5d70-4aa9-b4a6-e3f84ae07453" (UID: "1587215e-5d70-4aa9-b4a6-e3f84ae07453"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.579485 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-inventory" (OuterVolumeSpecName: "inventory") pod "1587215e-5d70-4aa9-b4a6-e3f84ae07453" (UID: "1587215e-5d70-4aa9-b4a6-e3f84ae07453"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.642614 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.642668 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmlp6\" (UniqueName: \"kubernetes.io/projected/1587215e-5d70-4aa9-b4a6-e3f84ae07453-kube-api-access-cmlp6\") on node \"crc\" DevicePath \"\"" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.642689 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1587215e-5d70-4aa9-b4a6-e3f84ae07453-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.805823 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.810401 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp" event={"ID":"1587215e-5d70-4aa9-b4a6-e3f84ae07453","Type":"ContainerDied","Data":"9166870b8742a357037fb4e542980303fba7f0246137d82888a7086f1edf4b08"} Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.810451 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9166870b8742a357037fb4e542980303fba7f0246137d82888a7086f1edf4b08" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.910290 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr"] Feb 14 19:15:53 crc kubenswrapper[4897]: E0214 19:15:53.910807 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1587215e-5d70-4aa9-b4a6-e3f84ae07453" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.910828 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1587215e-5d70-4aa9-b4a6-e3f84ae07453" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 19:15:53 crc kubenswrapper[4897]: E0214 19:15:53.910847 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38610d34-ba7c-44fe-b975-6a8218c6937c" containerName="collect-profiles" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.910856 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="38610d34-ba7c-44fe-b975-6a8218c6937c" containerName="collect-profiles" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.911195 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="38610d34-ba7c-44fe-b975-6a8218c6937c" containerName="collect-profiles" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.911227 4897 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1587215e-5d70-4aa9-b4a6-e3f84ae07453" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.912124 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.915048 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.915080 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.915079 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.921979 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:15:53 crc kubenswrapper[4897]: I0214 19:15:53.934301 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr"] Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.060448 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86btj\" (UniqueName: \"kubernetes.io/projected/56912149-5519-4b45-8e6e-4585b86ee278-kube-api-access-86btj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.061546 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.061832 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.163668 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86btj\" (UniqueName: \"kubernetes.io/projected/56912149-5519-4b45-8e6e-4585b86ee278-kube-api-access-86btj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.163888 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.163959 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: 
\"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.169744 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.182259 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.186899 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86btj\" (UniqueName: \"kubernetes.io/projected/56912149-5519-4b45-8e6e-4585b86ee278-kube-api-access-86btj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.262714 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.910667 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr"] Feb 14 19:15:54 crc kubenswrapper[4897]: W0214 19:15:54.912356 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56912149_5519_4b45_8e6e_4585b86ee278.slice/crio-12ef183ec4fdf559f115fb031004ca7228ea24220682d94f46746418ab626834 WatchSource:0}: Error finding container 12ef183ec4fdf559f115fb031004ca7228ea24220682d94f46746418ab626834: Status 404 returned error can't find the container with id 12ef183ec4fdf559f115fb031004ca7228ea24220682d94f46746418ab626834 Feb 14 19:15:54 crc kubenswrapper[4897]: I0214 19:15:54.915428 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:15:55 crc kubenswrapper[4897]: I0214 19:15:55.837628 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" event={"ID":"56912149-5519-4b45-8e6e-4585b86ee278","Type":"ContainerStarted","Data":"ff58686ca303766d14690d37346bb15ad6b1f75b9d49bb260d0ed9d4a7e200f8"} Feb 14 19:15:55 crc kubenswrapper[4897]: I0214 19:15:55.838147 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" event={"ID":"56912149-5519-4b45-8e6e-4585b86ee278","Type":"ContainerStarted","Data":"12ef183ec4fdf559f115fb031004ca7228ea24220682d94f46746418ab626834"} Feb 14 19:15:55 crc kubenswrapper[4897]: I0214 19:15:55.872364 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" podStartSLOduration=2.369801446 podStartE2EDuration="2.872335715s" 
podCreationTimestamp="2026-02-14 19:15:53 +0000 UTC" firstStartedPulling="2026-02-14 19:15:54.915187297 +0000 UTC m=+2007.891595780" lastFinishedPulling="2026-02-14 19:15:55.417721566 +0000 UTC m=+2008.394130049" observedRunningTime="2026-02-14 19:15:55.851307241 +0000 UTC m=+2008.827715764" watchObservedRunningTime="2026-02-14 19:15:55.872335715 +0000 UTC m=+2008.848744228" Feb 14 19:16:30 crc kubenswrapper[4897]: I0214 19:16:30.087957 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-c9dv9"] Feb 14 19:16:30 crc kubenswrapper[4897]: I0214 19:16:30.129388 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-c9dv9"] Feb 14 19:16:30 crc kubenswrapper[4897]: I0214 19:16:30.141427 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f35f-account-create-update-nxqdw"] Feb 14 19:16:30 crc kubenswrapper[4897]: I0214 19:16:30.151213 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sc5s2"] Feb 14 19:16:30 crc kubenswrapper[4897]: I0214 19:16:30.160916 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f35f-account-create-update-nxqdw"] Feb 14 19:16:30 crc kubenswrapper[4897]: I0214 19:16:30.169694 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sc5s2"] Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.039618 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-5q8wx"] Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.052927 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e2ee-account-create-update-g2kt2"] Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.072508 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-5q8wx"] Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.086337 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-e2ee-account-create-update-g2kt2"] Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.830514 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312c2219-c7db-4a28-901f-1d03a379e088" path="/var/lib/kubelet/pods/312c2219-c7db-4a28-901f-1d03a379e088/volumes" Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.831906 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8b79c5-fdc5-49a7-8da5-278bbc982740" path="/var/lib/kubelet/pods/8f8b79c5-fdc5-49a7-8da5-278bbc982740/volumes" Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.832928 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adaee017-ddec-4818-acc9-54a5caa1571f" path="/var/lib/kubelet/pods/adaee017-ddec-4818-acc9-54a5caa1571f/volumes" Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.834020 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c892fc72-2d4f-4417-9078-65f0519fcc2d" path="/var/lib/kubelet/pods/c892fc72-2d4f-4417-9078-65f0519fcc2d/volumes" Feb 14 19:16:31 crc kubenswrapper[4897]: I0214 19:16:31.836563 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e276c7c0-3036-4f26-8971-92a5c22b7840" path="/var/lib/kubelet/pods/e276c7c0-3036-4f26-8971-92a5c22b7840/volumes" Feb 14 19:16:32 crc kubenswrapper[4897]: I0214 19:16:32.033775 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5d42-account-create-update-kw2zk"] Feb 14 19:16:32 crc kubenswrapper[4897]: I0214 19:16:32.042560 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5d42-account-create-update-kw2zk"] Feb 14 19:16:33 crc kubenswrapper[4897]: I0214 19:16:33.820755 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd09d35-34e4-4a37-ac93-455f2f12b0d5" path="/var/lib/kubelet/pods/6fd09d35-34e4-4a37-ac93-455f2f12b0d5/volumes" Feb 14 19:16:40 crc kubenswrapper[4897]: I0214 19:16:40.942009 4897 
scope.go:117] "RemoveContainer" containerID="09bce61846694409c52d5561b533845c9a9af05db94f0dffac6228107bde0ee9" Feb 14 19:16:40 crc kubenswrapper[4897]: I0214 19:16:40.982984 4897 scope.go:117] "RemoveContainer" containerID="51f15af252060cf0e1250230839d179aa78a4e866fd1898e7e767a9d820f37fe" Feb 14 19:16:41 crc kubenswrapper[4897]: I0214 19:16:41.043274 4897 scope.go:117] "RemoveContainer" containerID="d44b65540a14b38bc434bfcb225316b32acaf10f73a82eef8a61ea2478482b1e" Feb 14 19:16:41 crc kubenswrapper[4897]: I0214 19:16:41.115267 4897 scope.go:117] "RemoveContainer" containerID="0a0b3c262e416aa480cc223b9d58998fbe0039550bfa686625055983b1fb03ef" Feb 14 19:16:41 crc kubenswrapper[4897]: I0214 19:16:41.151015 4897 scope.go:117] "RemoveContainer" containerID="1aee4ac646dc92ceb127741e10d83251661e23f5271fd7774954da5da9967412" Feb 14 19:16:41 crc kubenswrapper[4897]: I0214 19:16:41.200117 4897 scope.go:117] "RemoveContainer" containerID="2ba67fcb195e8f8d2f528735bbb600ef91d9a9fc692c8bd8b1c99e45aa5a6068" Feb 14 19:16:41 crc kubenswrapper[4897]: I0214 19:16:41.251898 4897 scope.go:117] "RemoveContainer" containerID="c5e4b357e3e2a2032666ed8ac6a46c18162b9637d256ffc347ec143c21db4e3c" Feb 14 19:16:58 crc kubenswrapper[4897]: I0214 19:16:58.710258 4897 generic.go:334] "Generic (PLEG): container finished" podID="56912149-5519-4b45-8e6e-4585b86ee278" containerID="ff58686ca303766d14690d37346bb15ad6b1f75b9d49bb260d0ed9d4a7e200f8" exitCode=0 Feb 14 19:16:58 crc kubenswrapper[4897]: I0214 19:16:58.710373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" event={"ID":"56912149-5519-4b45-8e6e-4585b86ee278","Type":"ContainerDied","Data":"ff58686ca303766d14690d37346bb15ad6b1f75b9d49bb260d0ed9d4a7e200f8"} Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.334952 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.471278 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-inventory\") pod \"56912149-5519-4b45-8e6e-4585b86ee278\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.471838 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-ssh-key-openstack-edpm-ipam\") pod \"56912149-5519-4b45-8e6e-4585b86ee278\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.472318 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86btj\" (UniqueName: \"kubernetes.io/projected/56912149-5519-4b45-8e6e-4585b86ee278-kube-api-access-86btj\") pod \"56912149-5519-4b45-8e6e-4585b86ee278\" (UID: \"56912149-5519-4b45-8e6e-4585b86ee278\") " Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.484397 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56912149-5519-4b45-8e6e-4585b86ee278-kube-api-access-86btj" (OuterVolumeSpecName: "kube-api-access-86btj") pod "56912149-5519-4b45-8e6e-4585b86ee278" (UID: "56912149-5519-4b45-8e6e-4585b86ee278"). InnerVolumeSpecName "kube-api-access-86btj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.525239 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56912149-5519-4b45-8e6e-4585b86ee278" (UID: "56912149-5519-4b45-8e6e-4585b86ee278"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.527960 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-inventory" (OuterVolumeSpecName: "inventory") pod "56912149-5519-4b45-8e6e-4585b86ee278" (UID: "56912149-5519-4b45-8e6e-4585b86ee278"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.575555 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86btj\" (UniqueName: \"kubernetes.io/projected/56912149-5519-4b45-8e6e-4585b86ee278-kube-api-access-86btj\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.575592 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.575602 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56912149-5519-4b45-8e6e-4585b86ee278-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.739236 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" 
event={"ID":"56912149-5519-4b45-8e6e-4585b86ee278","Type":"ContainerDied","Data":"12ef183ec4fdf559f115fb031004ca7228ea24220682d94f46746418ab626834"} Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.739275 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12ef183ec4fdf559f115fb031004ca7228ea24220682d94f46746418ab626834" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.739310 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.849834 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh"] Feb 14 19:17:00 crc kubenswrapper[4897]: E0214 19:17:00.850771 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56912149-5519-4b45-8e6e-4585b86ee278" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.850796 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="56912149-5519-4b45-8e6e-4585b86ee278" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.851174 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="56912149-5519-4b45-8e6e-4585b86ee278" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.852306 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.861442 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.861893 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.861908 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.862115 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.864235 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh"] Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.986257 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.986348 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gtv\" (UniqueName: \"kubernetes.io/projected/9c7ad489-e9fd-47b8-aab8-7042415968af-kube-api-access-g8gtv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 
19:17:00 crc kubenswrapper[4897]: I0214 19:17:00.986447 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.088365 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.088550 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.088649 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8gtv\" (UniqueName: \"kubernetes.io/projected/9c7ad489-e9fd-47b8-aab8-7042415968af-kube-api-access-g8gtv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.092782 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.096348 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.121977 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8gtv\" (UniqueName: \"kubernetes.io/projected/9c7ad489-e9fd-47b8-aab8-7042415968af-kube-api-access-g8gtv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tshsh\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.175368 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:01 crc kubenswrapper[4897]: I0214 19:17:01.772425 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh"] Feb 14 19:17:02 crc kubenswrapper[4897]: I0214 19:17:02.779889 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" event={"ID":"9c7ad489-e9fd-47b8-aab8-7042415968af","Type":"ContainerStarted","Data":"098efdb58095bb5ffddae1e01bfa86d3fadfa1d321f74dbe48df4f9f2613e0a1"} Feb 14 19:17:02 crc kubenswrapper[4897]: I0214 19:17:02.780363 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" event={"ID":"9c7ad489-e9fd-47b8-aab8-7042415968af","Type":"ContainerStarted","Data":"ce436c98aa145985a01620fc8e75462243214ed121a6ef2244070834e8cc25fb"} Feb 14 19:17:02 crc kubenswrapper[4897]: I0214 19:17:02.809349 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" podStartSLOduration=2.357397206 podStartE2EDuration="2.809326021s" podCreationTimestamp="2026-02-14 19:17:00 +0000 UTC" firstStartedPulling="2026-02-14 19:17:01.768655236 +0000 UTC m=+2074.745063719" lastFinishedPulling="2026-02-14 19:17:02.220584001 +0000 UTC m=+2075.196992534" observedRunningTime="2026-02-14 19:17:02.807214845 +0000 UTC m=+2075.783623328" watchObservedRunningTime="2026-02-14 19:17:02.809326021 +0000 UTC m=+2075.785734524" Feb 14 19:17:03 crc kubenswrapper[4897]: I0214 19:17:03.063006 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4tshq"] Feb 14 19:17:03 crc kubenswrapper[4897]: I0214 19:17:03.097260 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4tshq"] Feb 14 19:17:03 crc 
kubenswrapper[4897]: I0214 19:17:03.808483 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5" path="/var/lib/kubelet/pods/1d7a49e4-fe90-44e0-87d2-dc4ca1872ed5/volumes" Feb 14 19:17:07 crc kubenswrapper[4897]: I0214 19:17:07.871103 4897 generic.go:334] "Generic (PLEG): container finished" podID="9c7ad489-e9fd-47b8-aab8-7042415968af" containerID="098efdb58095bb5ffddae1e01bfa86d3fadfa1d321f74dbe48df4f9f2613e0a1" exitCode=0 Feb 14 19:17:07 crc kubenswrapper[4897]: I0214 19:17:07.871230 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" event={"ID":"9c7ad489-e9fd-47b8-aab8-7042415968af","Type":"ContainerDied","Data":"098efdb58095bb5ffddae1e01bfa86d3fadfa1d321f74dbe48df4f9f2613e0a1"} Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.437749 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.525339 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-ssh-key-openstack-edpm-ipam\") pod \"9c7ad489-e9fd-47b8-aab8-7042415968af\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.525447 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8gtv\" (UniqueName: \"kubernetes.io/projected/9c7ad489-e9fd-47b8-aab8-7042415968af-kube-api-access-g8gtv\") pod \"9c7ad489-e9fd-47b8-aab8-7042415968af\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.525793 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-inventory\") pod \"9c7ad489-e9fd-47b8-aab8-7042415968af\" (UID: \"9c7ad489-e9fd-47b8-aab8-7042415968af\") " Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.543584 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7ad489-e9fd-47b8-aab8-7042415968af-kube-api-access-g8gtv" (OuterVolumeSpecName: "kube-api-access-g8gtv") pod "9c7ad489-e9fd-47b8-aab8-7042415968af" (UID: "9c7ad489-e9fd-47b8-aab8-7042415968af"). InnerVolumeSpecName "kube-api-access-g8gtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.565694 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9c7ad489-e9fd-47b8-aab8-7042415968af" (UID: "9c7ad489-e9fd-47b8-aab8-7042415968af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.616008 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-inventory" (OuterVolumeSpecName: "inventory") pod "9c7ad489-e9fd-47b8-aab8-7042415968af" (UID: "9c7ad489-e9fd-47b8-aab8-7042415968af"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.629518 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.629724 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8gtv\" (UniqueName: \"kubernetes.io/projected/9c7ad489-e9fd-47b8-aab8-7042415968af-kube-api-access-g8gtv\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.629823 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c7ad489-e9fd-47b8-aab8-7042415968af-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.899207 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" event={"ID":"9c7ad489-e9fd-47b8-aab8-7042415968af","Type":"ContainerDied","Data":"ce436c98aa145985a01620fc8e75462243214ed121a6ef2244070834e8cc25fb"} Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.899247 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce436c98aa145985a01620fc8e75462243214ed121a6ef2244070834e8cc25fb" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.899288 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tshsh" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.990115 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l"] Feb 14 19:17:09 crc kubenswrapper[4897]: E0214 19:17:09.990599 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7ad489-e9fd-47b8-aab8-7042415968af" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.990613 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7ad489-e9fd-47b8-aab8-7042415968af" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.990864 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7ad489-e9fd-47b8-aab8-7042415968af" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 19:17:09 crc kubenswrapper[4897]: I0214 19:17:09.991692 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:09.998383 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.000729 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.000966 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.001142 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.013535 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l"] Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.146572 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.146820 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxw57\" (UniqueName: \"kubernetes.io/projected/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-kube-api-access-rxw57\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.147020 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.249851 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.250062 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxw57\" (UniqueName: \"kubernetes.io/projected/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-kube-api-access-rxw57\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.250190 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.269175 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.270537 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.270830 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxw57\" (UniqueName: \"kubernetes.io/projected/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-kube-api-access-rxw57\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mbp8l\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.322727 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:10 crc kubenswrapper[4897]: I0214 19:17:10.945119 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l"] Feb 14 19:17:11 crc kubenswrapper[4897]: I0214 19:17:11.926383 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" event={"ID":"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1","Type":"ContainerStarted","Data":"f4bb38b6452263f208f56f524bdccf65cceed0f3435d0e942398697e3210ef41"} Feb 14 19:17:11 crc kubenswrapper[4897]: I0214 19:17:11.926789 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" event={"ID":"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1","Type":"ContainerStarted","Data":"2509fd6158140c92728c1f93ce27f8437bbe579cd72cd80a8bb8dae89a0941c8"} Feb 14 19:17:11 crc kubenswrapper[4897]: I0214 19:17:11.954255 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" podStartSLOduration=2.522765584 podStartE2EDuration="2.954233891s" podCreationTimestamp="2026-02-14 19:17:09 +0000 UTC" firstStartedPulling="2026-02-14 19:17:10.945372587 +0000 UTC m=+2083.921781100" lastFinishedPulling="2026-02-14 19:17:11.376840914 +0000 UTC m=+2084.353249407" observedRunningTime="2026-02-14 19:17:11.946646947 +0000 UTC m=+2084.923055430" watchObservedRunningTime="2026-02-14 19:17:11.954233891 +0000 UTC m=+2084.930642384" Feb 14 19:17:13 crc kubenswrapper[4897]: I0214 19:17:13.058524 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-t6njw"] Feb 14 19:17:13 crc kubenswrapper[4897]: I0214 19:17:13.069722 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-t6njw"] Feb 14 19:17:13 crc kubenswrapper[4897]: I0214 19:17:13.809255 4897 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9031cb08-dfc3-4d67-b9f2-2953713beb20" path="/var/lib/kubelet/pods/9031cb08-dfc3-4d67-b9f2-2953713beb20/volumes" Feb 14 19:17:14 crc kubenswrapper[4897]: I0214 19:17:14.036241 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-99e6-account-create-update-wvnr5"] Feb 14 19:17:14 crc kubenswrapper[4897]: I0214 19:17:14.048538 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-99e6-account-create-update-wvnr5"] Feb 14 19:17:15 crc kubenswrapper[4897]: I0214 19:17:15.808436 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afcb6bce-1132-4c0b-836f-82c6b0fd1406" path="/var/lib/kubelet/pods/afcb6bce-1132-4c0b-836f-82c6b0fd1406/volumes" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.423163 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9sszv"] Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.425825 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.441724 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sszv"] Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.449362 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-utilities\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.449417 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-catalog-content\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.449785 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmbn\" (UniqueName: \"kubernetes.io/projected/237317f5-a472-435a-9b65-4532d5d48bf1-kube-api-access-dcmbn\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.551594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-utilities\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.551642 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-catalog-content\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.551769 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmbn\" (UniqueName: \"kubernetes.io/projected/237317f5-a472-435a-9b65-4532d5d48bf1-kube-api-access-dcmbn\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.552134 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-utilities\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.552193 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-catalog-content\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.570833 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmbn\" (UniqueName: \"kubernetes.io/projected/237317f5-a472-435a-9b65-4532d5d48bf1-kube-api-access-dcmbn\") pod \"redhat-marketplace-9sszv\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:21 crc kubenswrapper[4897]: I0214 19:17:21.744773 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:22 crc kubenswrapper[4897]: I0214 19:17:22.230138 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sszv"] Feb 14 19:17:23 crc kubenswrapper[4897]: I0214 19:17:23.059293 4897 generic.go:334] "Generic (PLEG): container finished" podID="237317f5-a472-435a-9b65-4532d5d48bf1" containerID="7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f" exitCode=0 Feb 14 19:17:23 crc kubenswrapper[4897]: I0214 19:17:23.059394 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerDied","Data":"7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f"} Feb 14 19:17:23 crc kubenswrapper[4897]: I0214 19:17:23.059592 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerStarted","Data":"479e48e788a9aee7985ce8d7540700e8f913fa7ee32b4487b5b2a6588e2f54ec"} Feb 14 19:17:24 crc kubenswrapper[4897]: I0214 19:17:24.071877 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerStarted","Data":"68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d"} Feb 14 19:17:25 crc kubenswrapper[4897]: I0214 19:17:25.087571 4897 generic.go:334] "Generic (PLEG): container finished" podID="237317f5-a472-435a-9b65-4532d5d48bf1" containerID="68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d" exitCode=0 Feb 14 19:17:25 crc kubenswrapper[4897]: I0214 19:17:25.087642 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" 
event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerDied","Data":"68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d"} Feb 14 19:17:26 crc kubenswrapper[4897]: I0214 19:17:26.110415 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerStarted","Data":"fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f"} Feb 14 19:17:26 crc kubenswrapper[4897]: I0214 19:17:26.145400 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9sszv" podStartSLOduration=2.645922385 podStartE2EDuration="5.145383595s" podCreationTimestamp="2026-02-14 19:17:21 +0000 UTC" firstStartedPulling="2026-02-14 19:17:23.062355797 +0000 UTC m=+2096.038764290" lastFinishedPulling="2026-02-14 19:17:25.561816997 +0000 UTC m=+2098.538225500" observedRunningTime="2026-02-14 19:17:26.13875238 +0000 UTC m=+2099.115160883" watchObservedRunningTime="2026-02-14 19:17:26.145383595 +0000 UTC m=+2099.121792078" Feb 14 19:17:31 crc kubenswrapper[4897]: I0214 19:17:31.726223 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:17:31 crc kubenswrapper[4897]: I0214 19:17:31.726909 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:17:31 crc kubenswrapper[4897]: I0214 19:17:31.744851 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:31 crc kubenswrapper[4897]: I0214 19:17:31.744890 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:31 crc kubenswrapper[4897]: I0214 19:17:31.820743 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:32 crc kubenswrapper[4897]: I0214 19:17:32.256118 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:32 crc kubenswrapper[4897]: I0214 19:17:32.317876 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sszv"] Feb 14 19:17:33 crc kubenswrapper[4897]: I0214 19:17:33.044476 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ltbnn"] Feb 14 19:17:33 crc kubenswrapper[4897]: I0214 19:17:33.056371 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ltbnn"] Feb 14 19:17:33 crc kubenswrapper[4897]: I0214 19:17:33.815339 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883bcca0-6930-4d70-9386-657adbf063c9" path="/var/lib/kubelet/pods/883bcca0-6930-4d70-9386-657adbf063c9/volumes" Feb 14 19:17:34 crc kubenswrapper[4897]: I0214 19:17:34.205440 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9sszv" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="registry-server" containerID="cri-o://fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f" gracePeriod=2 Feb 14 19:17:34 crc kubenswrapper[4897]: I0214 19:17:34.959772 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.046019 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-catalog-content\") pod \"237317f5-a472-435a-9b65-4532d5d48bf1\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.046192 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmbn\" (UniqueName: \"kubernetes.io/projected/237317f5-a472-435a-9b65-4532d5d48bf1-kube-api-access-dcmbn\") pod \"237317f5-a472-435a-9b65-4532d5d48bf1\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.046468 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-utilities\") pod \"237317f5-a472-435a-9b65-4532d5d48bf1\" (UID: \"237317f5-a472-435a-9b65-4532d5d48bf1\") " Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.048559 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-utilities" (OuterVolumeSpecName: "utilities") pod "237317f5-a472-435a-9b65-4532d5d48bf1" (UID: "237317f5-a472-435a-9b65-4532d5d48bf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.052537 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237317f5-a472-435a-9b65-4532d5d48bf1-kube-api-access-dcmbn" (OuterVolumeSpecName: "kube-api-access-dcmbn") pod "237317f5-a472-435a-9b65-4532d5d48bf1" (UID: "237317f5-a472-435a-9b65-4532d5d48bf1"). InnerVolumeSpecName "kube-api-access-dcmbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.058898 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmbn\" (UniqueName: \"kubernetes.io/projected/237317f5-a472-435a-9b65-4532d5d48bf1-kube-api-access-dcmbn\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.058948 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.090432 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "237317f5-a472-435a-9b65-4532d5d48bf1" (UID: "237317f5-a472-435a-9b65-4532d5d48bf1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.161579 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237317f5-a472-435a-9b65-4532d5d48bf1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.215661 4897 generic.go:334] "Generic (PLEG): container finished" podID="237317f5-a472-435a-9b65-4532d5d48bf1" containerID="fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f" exitCode=0 Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.215729 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sszv" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.215732 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerDied","Data":"fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f"} Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.217353 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sszv" event={"ID":"237317f5-a472-435a-9b65-4532d5d48bf1","Type":"ContainerDied","Data":"479e48e788a9aee7985ce8d7540700e8f913fa7ee32b4487b5b2a6588e2f54ec"} Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.217375 4897 scope.go:117] "RemoveContainer" containerID="fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.237829 4897 scope.go:117] "RemoveContainer" containerID="68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.261697 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sszv"] Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.275806 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sszv"] Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.290018 4897 scope.go:117] "RemoveContainer" containerID="7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.324705 4897 scope.go:117] "RemoveContainer" containerID="fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f" Feb 14 19:17:35 crc kubenswrapper[4897]: E0214 19:17:35.325304 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f\": container with ID starting with fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f not found: ID does not exist" containerID="fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.325346 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f"} err="failed to get container status \"fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f\": rpc error: code = NotFound desc = could not find container \"fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f\": container with ID starting with fc835f74aea39590ec38da5997112bbca0fa3a2bf9905daeae682725da9e9a3f not found: ID does not exist" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.325378 4897 scope.go:117] "RemoveContainer" containerID="68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d" Feb 14 19:17:35 crc kubenswrapper[4897]: E0214 19:17:35.325848 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d\": container with ID starting with 68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d not found: ID does not exist" containerID="68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.325876 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d"} err="failed to get container status \"68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d\": rpc error: code = NotFound desc = could not find container \"68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d\": container with ID 
starting with 68d5d3785acd088f3eb1d4e954cc4225f4a6e2964218584a76c20a90b451c37d not found: ID does not exist" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.325898 4897 scope.go:117] "RemoveContainer" containerID="7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f" Feb 14 19:17:35 crc kubenswrapper[4897]: E0214 19:17:35.326442 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f\": container with ID starting with 7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f not found: ID does not exist" containerID="7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.326458 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f"} err="failed to get container status \"7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f\": rpc error: code = NotFound desc = could not find container \"7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f\": container with ID starting with 7dd62f71c74e839d6521086adee8479819bd14b56ae2f536713147711469068f not found: ID does not exist" Feb 14 19:17:35 crc kubenswrapper[4897]: I0214 19:17:35.817372 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" path="/var/lib/kubelet/pods/237317f5-a472-435a-9b65-4532d5d48bf1/volumes" Feb 14 19:17:41 crc kubenswrapper[4897]: I0214 19:17:41.446142 4897 scope.go:117] "RemoveContainer" containerID="6d7b1cc339eda193c2cebf96994400598d71e3b6213f79b28ff307fc70256467" Feb 14 19:17:41 crc kubenswrapper[4897]: I0214 19:17:41.473527 4897 scope.go:117] "RemoveContainer" containerID="dddae43a08b2757ad4f6142d87658cdd6c6686245df43ca11144d39c9ab8ede9" Feb 14 19:17:41 crc kubenswrapper[4897]: 
I0214 19:17:41.541671 4897 scope.go:117] "RemoveContainer" containerID="55bb5a98301b0ab3ae3fbd80df8fa1d0991f008eac023a4c67ea9f0ca034aa77" Feb 14 19:17:41 crc kubenswrapper[4897]: I0214 19:17:41.631486 4897 scope.go:117] "RemoveContainer" containerID="7daa3e9182145db070e6ed99d9195899d85acbe4ee391f4c15f379f0c1bf3b1f" Feb 14 19:17:42 crc kubenswrapper[4897]: I0214 19:17:42.077613 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ttbmx"] Feb 14 19:17:42 crc kubenswrapper[4897]: I0214 19:17:42.097355 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ttbmx"] Feb 14 19:17:43 crc kubenswrapper[4897]: I0214 19:17:43.813277 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd" path="/var/lib/kubelet/pods/7a2e5b9c-9d4d-430e-8fbd-ae317ee1fdcd/volumes" Feb 14 19:17:49 crc kubenswrapper[4897]: I0214 19:17:49.415962 4897 generic.go:334] "Generic (PLEG): container finished" podID="b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" containerID="f4bb38b6452263f208f56f524bdccf65cceed0f3435d0e942398697e3210ef41" exitCode=0 Feb 14 19:17:49 crc kubenswrapper[4897]: I0214 19:17:49.416085 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" event={"ID":"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1","Type":"ContainerDied","Data":"f4bb38b6452263f208f56f524bdccf65cceed0f3435d0e942398697e3210ef41"} Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.119217 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.232796 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-inventory\") pod \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.232907 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxw57\" (UniqueName: \"kubernetes.io/projected/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-kube-api-access-rxw57\") pod \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.233053 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-ssh-key-openstack-edpm-ipam\") pod \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\" (UID: \"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1\") " Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.237923 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-kube-api-access-rxw57" (OuterVolumeSpecName: "kube-api-access-rxw57") pod "b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" (UID: "b7ad74b7-7e30-4bfd-b608-a4c89a5286c1"). InnerVolumeSpecName "kube-api-access-rxw57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.265176 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" (UID: "b7ad74b7-7e30-4bfd-b608-a4c89a5286c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.286560 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-inventory" (OuterVolumeSpecName: "inventory") pod "b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" (UID: "b7ad74b7-7e30-4bfd-b608-a4c89a5286c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.336996 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.337070 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.337102 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxw57\" (UniqueName: \"kubernetes.io/projected/b7ad74b7-7e30-4bfd-b608-a4c89a5286c1-kube-api-access-rxw57\") on node \"crc\" DevicePath \"\"" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.444553 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" 
event={"ID":"b7ad74b7-7e30-4bfd-b608-a4c89a5286c1","Type":"ContainerDied","Data":"2509fd6158140c92728c1f93ce27f8437bbe579cd72cd80a8bb8dae89a0941c8"} Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.444605 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2509fd6158140c92728c1f93ce27f8437bbe579cd72cd80a8bb8dae89a0941c8" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.444671 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mbp8l" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.573097 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"] Feb 14 19:17:51 crc kubenswrapper[4897]: E0214 19:17:51.573896 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="extract-content" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.573914 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="extract-content" Feb 14 19:17:51 crc kubenswrapper[4897]: E0214 19:17:51.573949 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="registry-server" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.573970 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="registry-server" Feb 14 19:17:51 crc kubenswrapper[4897]: E0214 19:17:51.573983 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="extract-utilities" Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.573989 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="extract-utilities" Feb 14 19:17:51 crc 
kubenswrapper[4897]: E0214 19:17:51.574004 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.574012 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.574300 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ad74b7-7e30-4bfd-b608-a4c89a5286c1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.574330 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="237317f5-a472-435a-9b65-4532d5d48bf1" containerName="registry-server"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.575132 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.580476 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.580919 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.581193 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.581399 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.586386 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"]
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.755780 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.755836 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99d5\" (UniqueName: \"kubernetes.io/projected/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-kube-api-access-g99d5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.755913 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.858070 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.858164 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99d5\" (UniqueName: \"kubernetes.io/projected/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-kube-api-access-g99d5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.858269 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.864385 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.868840 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.879865 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99d5\" (UniqueName: \"kubernetes.io/projected/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-kube-api-access-g99d5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:51 crc kubenswrapper[4897]: I0214 19:17:51.902383 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:17:52 crc kubenswrapper[4897]: I0214 19:17:52.514967 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"]
Feb 14 19:17:53 crc kubenswrapper[4897]: I0214 19:17:53.492436 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv" event={"ID":"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2","Type":"ContainerStarted","Data":"e3ccf5785ffe6d7918a0b6b736d0e61e252ba2e239bb0a0211007ddd89c562d6"}
Feb 14 19:17:53 crc kubenswrapper[4897]: I0214 19:17:53.492861 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv" event={"ID":"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2","Type":"ContainerStarted","Data":"c113c0cc1108c400e4182c22211babad14ce3b966c1d061b18b993b4d56779f2"}
Feb 14 19:17:53 crc kubenswrapper[4897]: I0214 19:17:53.522275 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv" podStartSLOduration=2.045978775 podStartE2EDuration="2.522253487s" podCreationTimestamp="2026-02-14 19:17:51 +0000 UTC" firstStartedPulling="2026-02-14 19:17:52.52434033 +0000 UTC m=+2125.500748833" lastFinishedPulling="2026-02-14 19:17:53.000614992 +0000 UTC m=+2125.977023545" observedRunningTime="2026-02-14 19:17:53.521483073 +0000 UTC m=+2126.497891626" watchObservedRunningTime="2026-02-14 19:17:53.522253487 +0000 UTC m=+2126.498661980"
Feb 14 19:18:01 crc kubenswrapper[4897]: I0214 19:18:01.725507 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:18:01 crc kubenswrapper[4897]: I0214 19:18:01.725990 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:18:24 crc kubenswrapper[4897]: I0214 19:18:24.047838 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hz8g2"]
Feb 14 19:18:24 crc kubenswrapper[4897]: I0214 19:18:24.064979 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hz8g2"]
Feb 14 19:18:25 crc kubenswrapper[4897]: I0214 19:18:25.839309 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22457187-fe82-4c9a-b565-95c7e561611f" path="/var/lib/kubelet/pods/22457187-fe82-4c9a-b565-95c7e561611f/volumes"
Feb 14 19:18:31 crc kubenswrapper[4897]: I0214 19:18:31.725891 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:18:31 crc kubenswrapper[4897]: I0214 19:18:31.727557 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:18:31 crc kubenswrapper[4897]: I0214 19:18:31.727691 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 19:18:31 crc kubenswrapper[4897]: I0214 19:18:31.728729 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd708665e8ea240d87012ffb10ef37fcbe9e649061cee70ad605f1da4f00112e"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 19:18:31 crc kubenswrapper[4897]: I0214 19:18:31.728900 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://dd708665e8ea240d87012ffb10ef37fcbe9e649061cee70ad605f1da4f00112e" gracePeriod=600
Feb 14 19:18:32 crc kubenswrapper[4897]: I0214 19:18:32.027488 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="dd708665e8ea240d87012ffb10ef37fcbe9e649061cee70ad605f1da4f00112e" exitCode=0
Feb 14 19:18:32 crc kubenswrapper[4897]: I0214 19:18:32.027578 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"dd708665e8ea240d87012ffb10ef37fcbe9e649061cee70ad605f1da4f00112e"}
Feb 14 19:18:32 crc kubenswrapper[4897]: I0214 19:18:32.027752 4897 scope.go:117] "RemoveContainer" containerID="66cad8cd108b6a3cf9ef162cbf8969bccb5cc4309b4b2dedbb4f6654a9ac66a6"
Feb 14 19:18:33 crc kubenswrapper[4897]: I0214 19:18:33.045371 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"}
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.742408 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2cj8"]
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.745941 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.767785 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2cj8"]
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.831129 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb547\" (UniqueName: \"kubernetes.io/projected/1eb7e26a-a137-4e83-afc8-0abfc31cf411-kube-api-access-bb547\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.831537 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-utilities\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.831676 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-catalog-content\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.934423 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb547\" (UniqueName: \"kubernetes.io/projected/1eb7e26a-a137-4e83-afc8-0abfc31cf411-kube-api-access-bb547\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.934600 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-utilities\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.934679 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-catalog-content\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.935369 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-catalog-content\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.935618 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-utilities\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:36 crc kubenswrapper[4897]: I0214 19:18:36.956236 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb547\" (UniqueName: \"kubernetes.io/projected/1eb7e26a-a137-4e83-afc8-0abfc31cf411-kube-api-access-bb547\") pod \"community-operators-c2cj8\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:37 crc kubenswrapper[4897]: I0214 19:18:37.066010 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:37 crc kubenswrapper[4897]: W0214 19:18:37.719292 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eb7e26a_a137_4e83_afc8_0abfc31cf411.slice/crio-cc73f87d20bde0b37bfecaa279b62da56d378003943fec9d840c935f0bfdf56b WatchSource:0}: Error finding container cc73f87d20bde0b37bfecaa279b62da56d378003943fec9d840c935f0bfdf56b: Status 404 returned error can't find the container with id cc73f87d20bde0b37bfecaa279b62da56d378003943fec9d840c935f0bfdf56b
Feb 14 19:18:37 crc kubenswrapper[4897]: I0214 19:18:37.722462 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2cj8"]
Feb 14 19:18:38 crc kubenswrapper[4897]: I0214 19:18:38.144873 4897 generic.go:334] "Generic (PLEG): container finished" podID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerID="63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83" exitCode=0
Feb 14 19:18:38 crc kubenswrapper[4897]: I0214 19:18:38.144923 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerDied","Data":"63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83"}
Feb 14 19:18:38 crc kubenswrapper[4897]: I0214 19:18:38.145190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerStarted","Data":"cc73f87d20bde0b37bfecaa279b62da56d378003943fec9d840c935f0bfdf56b"}
Feb 14 19:18:39 crc kubenswrapper[4897]: I0214 19:18:39.160219 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerStarted","Data":"66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81"}
Feb 14 19:18:41 crc kubenswrapper[4897]: I0214 19:18:41.187400 4897 generic.go:334] "Generic (PLEG): container finished" podID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerID="66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81" exitCode=0
Feb 14 19:18:41 crc kubenswrapper[4897]: I0214 19:18:41.187477 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerDied","Data":"66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81"}
Feb 14 19:18:41 crc kubenswrapper[4897]: I0214 19:18:41.845370 4897 scope.go:117] "RemoveContainer" containerID="fcee3462510703d0a5ac1111e66700e2e376aad9911ff3c41e21495c8b737986"
Feb 14 19:18:41 crc kubenswrapper[4897]: I0214 19:18:41.882063 4897 scope.go:117] "RemoveContainer" containerID="a311d23009ed170bc872291a802e4e523b9f509d873bf0226a3762c05d5826c8"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.201043 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerStarted","Data":"862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac"}
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.234682 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2cj8" podStartSLOduration=2.729304716 podStartE2EDuration="6.234654378s" podCreationTimestamp="2026-02-14 19:18:36 +0000 UTC" firstStartedPulling="2026-02-14 19:18:38.146929704 +0000 UTC m=+2171.123338197" lastFinishedPulling="2026-02-14 19:18:41.652279326 +0000 UTC m=+2174.628687859" observedRunningTime="2026-02-14 19:18:42.22180105 +0000 UTC m=+2175.198209563" watchObservedRunningTime="2026-02-14 19:18:42.234654378 +0000 UTC m=+2175.211062881"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.337085 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzmnc"]
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.339797 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.343455 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzmnc"]
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.384986 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc29n\" (UniqueName: \"kubernetes.io/projected/1f9c3f2f-937d-4587-8a70-6380440bc033-kube-api-access-wc29n\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.385767 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-utilities\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.385865 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-catalog-content\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.488793 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc29n\" (UniqueName: \"kubernetes.io/projected/1f9c3f2f-937d-4587-8a70-6380440bc033-kube-api-access-wc29n\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.488928 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-utilities\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.488947 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-catalog-content\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.489442 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-catalog-content\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.489924 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-utilities\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.517234 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc29n\" (UniqueName: \"kubernetes.io/projected/1f9c3f2f-937d-4587-8a70-6380440bc033-kube-api-access-wc29n\") pod \"certified-operators-vzmnc\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:42 crc kubenswrapper[4897]: I0214 19:18:42.687688 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzmnc"
Feb 14 19:18:43 crc kubenswrapper[4897]: I0214 19:18:43.251465 4897 generic.go:334] "Generic (PLEG): container finished" podID="f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" containerID="e3ccf5785ffe6d7918a0b6b736d0e61e252ba2e239bb0a0211007ddd89c562d6" exitCode=0
Feb 14 19:18:43 crc kubenswrapper[4897]: I0214 19:18:43.251508 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv" event={"ID":"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2","Type":"ContainerDied","Data":"e3ccf5785ffe6d7918a0b6b736d0e61e252ba2e239bb0a0211007ddd89c562d6"}
Feb 14 19:18:43 crc kubenswrapper[4897]: I0214 19:18:43.308483 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzmnc"]
Feb 14 19:18:44 crc kubenswrapper[4897]: I0214 19:18:44.272505 4897 generic.go:334] "Generic (PLEG): container finished" podID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerID="6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452" exitCode=0
Feb 14 19:18:44 crc kubenswrapper[4897]: I0214 19:18:44.273086 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerDied","Data":"6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452"}
Feb 14 19:18:44 crc kubenswrapper[4897]: I0214 19:18:44.273129 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerStarted","Data":"01e392e6544cf65fa83f766ba76a34abdde10b4c4f747df8309d526cd7ce79f0"}
Feb 14 19:18:44 crc kubenswrapper[4897]: I0214 19:18:44.949349 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.094974 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-inventory\") pod \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") "
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.095246 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g99d5\" (UniqueName: \"kubernetes.io/projected/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-kube-api-access-g99d5\") pod \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") "
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.095311 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-ssh-key-openstack-edpm-ipam\") pod \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\" (UID: \"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2\") "
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.105550 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-kube-api-access-g99d5" (OuterVolumeSpecName: "kube-api-access-g99d5") pod "f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" (UID: "f4d9e4e2-d6b3-4618-9535-35fd4379f2a2"). InnerVolumeSpecName "kube-api-access-g99d5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.136008 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" (UID: "f4d9e4e2-d6b3-4618-9535-35fd4379f2a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.136230 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-inventory" (OuterVolumeSpecName: "inventory") pod "f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" (UID: "f4d9e4e2-d6b3-4618-9535-35fd4379f2a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.199483 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g99d5\" (UniqueName: \"kubernetes.io/projected/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-kube-api-access-g99d5\") on node \"crc\" DevicePath \"\""
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.199554 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.199583 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4d9e4e2-d6b3-4618-9535-35fd4379f2a2-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.290533 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerStarted","Data":"ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8"}
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.293230 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.293300 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv" event={"ID":"f4d9e4e2-d6b3-4618-9535-35fd4379f2a2","Type":"ContainerDied","Data":"c113c0cc1108c400e4182c22211babad14ce3b966c1d061b18b993b4d56779f2"}
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.293398 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c113c0cc1108c400e4182c22211babad14ce3b966c1d061b18b993b4d56779f2"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.440682 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hl94t"]
Feb 14 19:18:45 crc kubenswrapper[4897]: E0214 19:18:45.441406 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.441426 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.441675 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d9e4e2-d6b3-4618-9535-35fd4379f2a2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.442571 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.445164 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.446286 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.446440 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.447181 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.454851 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hl94t"]
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.505718 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.506218 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9sm\" (UniqueName: \"kubernetes.io/projected/0a68fa67-8186-4606-96b5-fc7ddfd97530-kube-api-access-rd9sm\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.506284 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.608697 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9sm\" (UniqueName: \"kubernetes.io/projected/0a68fa67-8186-4606-96b5-fc7ddfd97530-kube-api-access-rd9sm\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.608752 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.608864 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.613813 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.614441 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.625491 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9sm\" (UniqueName: \"kubernetes.io/projected/0a68fa67-8186-4606-96b5-fc7ddfd97530-kube-api-access-rd9sm\") pod \"ssh-known-hosts-edpm-deployment-hl94t\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:45 crc kubenswrapper[4897]: I0214 19:18:45.761644 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t"
Feb 14 19:18:46 crc kubenswrapper[4897]: I0214 19:18:46.432890 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hl94t"]
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.066887 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.067289 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.144086 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.321918 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" event={"ID":"0a68fa67-8186-4606-96b5-fc7ddfd97530","Type":"ContainerStarted","Data":"75b3b7910a6b4c035c109e2acf8c9dd675995e107308193c9927fd48b261aed6"}
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.321986 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" event={"ID":"0a68fa67-8186-4606-96b5-fc7ddfd97530","Type":"ContainerStarted","Data":"0a4f92aa9ddcbb2ff1c95c017dd0f9415a3098c7f0ec2cd2720aac2c9c6b85bc"}
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.329289 4897 generic.go:334] "Generic (PLEG): container finished" podID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerID="ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8" exitCode=0
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.330600 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerDied","Data":"ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8"}
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.345819 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" podStartSLOduration=1.8483664690000001 podStartE2EDuration="2.345799275s" podCreationTimestamp="2026-02-14 19:18:45 +0000 UTC" firstStartedPulling="2026-02-14 19:18:46.445640891 +0000 UTC m=+2179.422049374" lastFinishedPulling="2026-02-14 19:18:46.943073677 +0000 UTC m=+2179.919482180" observedRunningTime="2026-02-14 19:18:47.34172505 +0000 UTC m=+2180.318133543" watchObservedRunningTime="2026-02-14 19:18:47.345799275 +0000 UTC m=+2180.322207768"
Feb 14 19:18:47 crc kubenswrapper[4897]: I0214 19:18:47.408981 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2cj8"
Feb 14 19:18:48 crc kubenswrapper[4897]: I0214 19:18:48.342849 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerStarted","Data":"fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3"}
Feb 14 19:18:48 crc kubenswrapper[4897]: I0214 19:18:48.370332 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzmnc" podStartSLOduration=2.8994276169999997 podStartE2EDuration="6.370309563s" podCreationTimestamp="2026-02-14 19:18:42 +0000 UTC" firstStartedPulling="2026-02-14 19:18:44.274722697 +0000 UTC m=+2177.251131190" lastFinishedPulling="2026-02-14 19:18:47.745604643 +0000 UTC m=+2180.722013136" observedRunningTime="2026-02-14 19:18:48.368815327 +0000 UTC m=+2181.345223850" watchObservedRunningTime="2026-02-14 19:18:48.370309563 +0000 UTC m=+2181.346718056"
Feb 14 19:18:49 crc kubenswrapper[4897]: I0214 19:18:49.108092 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2cj8"]
Feb 14 19:18:49 crc kubenswrapper[4897]: I0214 19:18:49.351850 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2cj8" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="registry-server" containerID="cri-o://862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac" gracePeriod=2
Feb 14 19:18:49 crc kubenswrapper[4897]: I0214 19:18:49.943591 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2cj8" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.126121 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-utilities\") pod \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.126206 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb547\" (UniqueName: \"kubernetes.io/projected/1eb7e26a-a137-4e83-afc8-0abfc31cf411-kube-api-access-bb547\") pod \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.126315 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-catalog-content\") pod \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\" (UID: \"1eb7e26a-a137-4e83-afc8-0abfc31cf411\") " Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.127779 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-utilities" (OuterVolumeSpecName: "utilities") pod "1eb7e26a-a137-4e83-afc8-0abfc31cf411" (UID: "1eb7e26a-a137-4e83-afc8-0abfc31cf411"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.135827 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb7e26a-a137-4e83-afc8-0abfc31cf411-kube-api-access-bb547" (OuterVolumeSpecName: "kube-api-access-bb547") pod "1eb7e26a-a137-4e83-afc8-0abfc31cf411" (UID: "1eb7e26a-a137-4e83-afc8-0abfc31cf411"). InnerVolumeSpecName "kube-api-access-bb547". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.205570 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1eb7e26a-a137-4e83-afc8-0abfc31cf411" (UID: "1eb7e26a-a137-4e83-afc8-0abfc31cf411"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.229184 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.229222 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb547\" (UniqueName: \"kubernetes.io/projected/1eb7e26a-a137-4e83-afc8-0abfc31cf411-kube-api-access-bb547\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.229241 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb7e26a-a137-4e83-afc8-0abfc31cf411-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.367173 4897 generic.go:334] "Generic (PLEG): container finished" podID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerID="862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac" exitCode=0 Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.367276 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2cj8" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.367244 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerDied","Data":"862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac"} Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.367451 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2cj8" event={"ID":"1eb7e26a-a137-4e83-afc8-0abfc31cf411","Type":"ContainerDied","Data":"cc73f87d20bde0b37bfecaa279b62da56d378003943fec9d840c935f0bfdf56b"} Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.367486 4897 scope.go:117] "RemoveContainer" containerID="862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.423634 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2cj8"] Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.432646 4897 scope.go:117] "RemoveContainer" containerID="66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.441310 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2cj8"] Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.469879 4897 scope.go:117] "RemoveContainer" containerID="63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.519557 4897 scope.go:117] "RemoveContainer" containerID="862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac" Feb 14 19:18:50 crc kubenswrapper[4897]: E0214 19:18:50.519986 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac\": container with ID starting with 862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac not found: ID does not exist" containerID="862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.520104 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac"} err="failed to get container status \"862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac\": rpc error: code = NotFound desc = could not find container \"862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac\": container with ID starting with 862c09f698005b5bbd274eb8af82c98df794971ffca1f4ca0c25f59056bd73ac not found: ID does not exist" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.520137 4897 scope.go:117] "RemoveContainer" containerID="66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81" Feb 14 19:18:50 crc kubenswrapper[4897]: E0214 19:18:50.520534 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81\": container with ID starting with 66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81 not found: ID does not exist" containerID="66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.520576 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81"} err="failed to get container status \"66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81\": rpc error: code = NotFound desc = could not find container \"66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81\": container with ID 
starting with 66fa255be5d50e623aadfde0955b24786f6eb218636d33fabe105202d4173f81 not found: ID does not exist" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.520598 4897 scope.go:117] "RemoveContainer" containerID="63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83" Feb 14 19:18:50 crc kubenswrapper[4897]: E0214 19:18:50.520871 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83\": container with ID starting with 63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83 not found: ID does not exist" containerID="63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83" Feb 14 19:18:50 crc kubenswrapper[4897]: I0214 19:18:50.520908 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83"} err="failed to get container status \"63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83\": rpc error: code = NotFound desc = could not find container \"63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83\": container with ID starting with 63048dfa23295ba9659a3047098501e469c24bb9d208949baeda9dc718f37f83 not found: ID does not exist" Feb 14 19:18:51 crc kubenswrapper[4897]: I0214 19:18:51.814365 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" path="/var/lib/kubelet/pods/1eb7e26a-a137-4e83-afc8-0abfc31cf411/volumes" Feb 14 19:18:52 crc kubenswrapper[4897]: I0214 19:18:52.688165 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vzmnc" Feb 14 19:18:52 crc kubenswrapper[4897]: I0214 19:18:52.688486 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzmnc" Feb 14 19:18:52 crc 
kubenswrapper[4897]: I0214 19:18:52.764287 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzmnc" Feb 14 19:18:53 crc kubenswrapper[4897]: I0214 19:18:53.491258 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzmnc" Feb 14 19:18:54 crc kubenswrapper[4897]: I0214 19:18:54.428135 4897 generic.go:334] "Generic (PLEG): container finished" podID="0a68fa67-8186-4606-96b5-fc7ddfd97530" containerID="75b3b7910a6b4c035c109e2acf8c9dd675995e107308193c9927fd48b261aed6" exitCode=0 Feb 14 19:18:54 crc kubenswrapper[4897]: I0214 19:18:54.428227 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" event={"ID":"0a68fa67-8186-4606-96b5-fc7ddfd97530","Type":"ContainerDied","Data":"75b3b7910a6b4c035c109e2acf8c9dd675995e107308193c9927fd48b261aed6"} Feb 14 19:18:54 crc kubenswrapper[4897]: I0214 19:18:54.515131 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzmnc"] Feb 14 19:18:55 crc kubenswrapper[4897]: I0214 19:18:55.449656 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzmnc" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="registry-server" containerID="cri-o://fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3" gracePeriod=2 Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.043499 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.062424 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzmnc" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.111854 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-catalog-content\") pod \"1f9c3f2f-937d-4587-8a70-6380440bc033\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.111961 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd9sm\" (UniqueName: \"kubernetes.io/projected/0a68fa67-8186-4606-96b5-fc7ddfd97530-kube-api-access-rd9sm\") pod \"0a68fa67-8186-4606-96b5-fc7ddfd97530\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.112009 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-inventory-0\") pod \"0a68fa67-8186-4606-96b5-fc7ddfd97530\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.112502 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc29n\" (UniqueName: \"kubernetes.io/projected/1f9c3f2f-937d-4587-8a70-6380440bc033-kube-api-access-wc29n\") pod \"1f9c3f2f-937d-4587-8a70-6380440bc033\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.112576 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-ssh-key-openstack-edpm-ipam\") pod \"0a68fa67-8186-4606-96b5-fc7ddfd97530\" (UID: \"0a68fa67-8186-4606-96b5-fc7ddfd97530\") " Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.112802 4897 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-utilities\") pod \"1f9c3f2f-937d-4587-8a70-6380440bc033\" (UID: \"1f9c3f2f-937d-4587-8a70-6380440bc033\") " Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.116114 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-utilities" (OuterVolumeSpecName: "utilities") pod "1f9c3f2f-937d-4587-8a70-6380440bc033" (UID: "1f9c3f2f-937d-4587-8a70-6380440bc033"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.120532 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a68fa67-8186-4606-96b5-fc7ddfd97530-kube-api-access-rd9sm" (OuterVolumeSpecName: "kube-api-access-rd9sm") pod "0a68fa67-8186-4606-96b5-fc7ddfd97530" (UID: "0a68fa67-8186-4606-96b5-fc7ddfd97530"). InnerVolumeSpecName "kube-api-access-rd9sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.123464 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9c3f2f-937d-4587-8a70-6380440bc033-kube-api-access-wc29n" (OuterVolumeSpecName: "kube-api-access-wc29n") pod "1f9c3f2f-937d-4587-8a70-6380440bc033" (UID: "1f9c3f2f-937d-4587-8a70-6380440bc033"). InnerVolumeSpecName "kube-api-access-wc29n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.149691 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a68fa67-8186-4606-96b5-fc7ddfd97530" (UID: "0a68fa67-8186-4606-96b5-fc7ddfd97530"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.155632 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0a68fa67-8186-4606-96b5-fc7ddfd97530" (UID: "0a68fa67-8186-4606-96b5-fc7ddfd97530"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.193465 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f9c3f2f-937d-4587-8a70-6380440bc033" (UID: "1f9c3f2f-937d-4587-8a70-6380440bc033"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.216092 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc29n\" (UniqueName: \"kubernetes.io/projected/1f9c3f2f-937d-4587-8a70-6380440bc033-kube-api-access-wc29n\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.216124 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.216135 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.216144 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f9c3f2f-937d-4587-8a70-6380440bc033-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.216152 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd9sm\" (UniqueName: \"kubernetes.io/projected/0a68fa67-8186-4606-96b5-fc7ddfd97530-kube-api-access-rd9sm\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.216160 4897 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0a68fa67-8186-4606-96b5-fc7ddfd97530-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.466500 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" event={"ID":"0a68fa67-8186-4606-96b5-fc7ddfd97530","Type":"ContainerDied","Data":"0a4f92aa9ddcbb2ff1c95c017dd0f9415a3098c7f0ec2cd2720aac2c9c6b85bc"} Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.466572 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a4f92aa9ddcbb2ff1c95c017dd0f9415a3098c7f0ec2cd2720aac2c9c6b85bc" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.466665 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hl94t" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.475266 4897 generic.go:334] "Generic (PLEG): container finished" podID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerID="fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3" exitCode=0 Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.475307 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerDied","Data":"fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3"} Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.475339 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzmnc" event={"ID":"1f9c3f2f-937d-4587-8a70-6380440bc033","Type":"ContainerDied","Data":"01e392e6544cf65fa83f766ba76a34abdde10b4c4f747df8309d526cd7ce79f0"} Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.475360 4897 scope.go:117] "RemoveContainer" containerID="fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.475421 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzmnc" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.520899 4897 scope.go:117] "RemoveContainer" containerID="ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.536681 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzmnc"] Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.566231 4897 scope.go:117] "RemoveContainer" containerID="6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.566395 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzmnc"] Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.576523 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh"] Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577134 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="extract-utilities" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577152 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="extract-utilities" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577164 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="extract-utilities" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577170 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="extract-utilities" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577187 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="registry-server" Feb 14 19:18:56 crc 
kubenswrapper[4897]: I0214 19:18:56.577195 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="registry-server" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577219 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="extract-content" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577224 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="extract-content" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577234 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="extract-content" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577240 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="extract-content" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577253 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="registry-server" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577259 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="registry-server" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.577271 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a68fa67-8186-4606-96b5-fc7ddfd97530" containerName="ssh-known-hosts-edpm-deployment" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577277 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a68fa67-8186-4606-96b5-fc7ddfd97530" containerName="ssh-known-hosts-edpm-deployment" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577489 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb7e26a-a137-4e83-afc8-0abfc31cf411" containerName="registry-server" Feb 
14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577515 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a68fa67-8186-4606-96b5-fc7ddfd97530" containerName="ssh-known-hosts-edpm-deployment" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.577539 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" containerName="registry-server" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.578395 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.581742 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.582180 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.582488 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.588976 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.599216 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh"] Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.625429 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvb42\" (UniqueName: \"kubernetes.io/projected/7f79cb40-76f5-40cd-9af1-82758f503ae7-kube-api-access-pvb42\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 
19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.625532 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.625605 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.666423 4897 scope.go:117] "RemoveContainer" containerID="fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.666977 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3\": container with ID starting with fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3 not found: ID does not exist" containerID="fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.667008 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3"} err="failed to get container status \"fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3\": rpc error: code = NotFound desc = could not find container 
\"fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3\": container with ID starting with fed0508a25f7501c700fe300467524b40f6d30619e79b1cbb68eb91001f388d3 not found: ID does not exist" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.667047 4897 scope.go:117] "RemoveContainer" containerID="ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.667802 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8\": container with ID starting with ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8 not found: ID does not exist" containerID="ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.667859 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8"} err="failed to get container status \"ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8\": rpc error: code = NotFound desc = could not find container \"ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8\": container with ID starting with ef312f6e02e7ea24600012170419eee9d61fcff81c1b6e37e2a6af1680c7fee8 not found: ID does not exist" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.667889 4897 scope.go:117] "RemoveContainer" containerID="6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452" Feb 14 19:18:56 crc kubenswrapper[4897]: E0214 19:18:56.668430 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452\": container with ID starting with 6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452 not found: ID does not exist" 
containerID="6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.668467 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452"} err="failed to get container status \"6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452\": rpc error: code = NotFound desc = could not find container \"6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452\": container with ID starting with 6bc89f248f1cc25f8ff36a26fa5a40c6a2dc9a689de423bb455ba53e9e784452 not found: ID does not exist" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.726582 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.727099 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.727308 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvb42\" (UniqueName: \"kubernetes.io/projected/7f79cb40-76f5-40cd-9af1-82758f503ae7-kube-api-access-pvb42\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 
crc kubenswrapper[4897]: I0214 19:18:56.732954 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.733247 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:56 crc kubenswrapper[4897]: I0214 19:18:56.745133 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvb42\" (UniqueName: \"kubernetes.io/projected/7f79cb40-76f5-40cd-9af1-82758f503ae7-kube-api-access-pvb42\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-446bh\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:57 crc kubenswrapper[4897]: I0214 19:18:57.009220 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:18:57 crc kubenswrapper[4897]: I0214 19:18:57.661758 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh"] Feb 14 19:18:57 crc kubenswrapper[4897]: I0214 19:18:57.806828 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9c3f2f-937d-4587-8a70-6380440bc033" path="/var/lib/kubelet/pods/1f9c3f2f-937d-4587-8a70-6380440bc033/volumes" Feb 14 19:18:58 crc kubenswrapper[4897]: I0214 19:18:58.498878 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" event={"ID":"7f79cb40-76f5-40cd-9af1-82758f503ae7","Type":"ContainerStarted","Data":"37773124d9e1307a2d4cd3ea5599fb0486325218d47ad3353217fe9afa840b21"} Feb 14 19:18:58 crc kubenswrapper[4897]: I0214 19:18:58.499277 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" event={"ID":"7f79cb40-76f5-40cd-9af1-82758f503ae7","Type":"ContainerStarted","Data":"ad375f3e9b90db3bf5918e5c49209f84e8cfef585c3e8e52312b57c3b6f74317"} Feb 14 19:18:58 crc kubenswrapper[4897]: I0214 19:18:58.522481 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" podStartSLOduration=2.154354572 podStartE2EDuration="2.52245035s" podCreationTimestamp="2026-02-14 19:18:56 +0000 UTC" firstStartedPulling="2026-02-14 19:18:57.656472262 +0000 UTC m=+2190.632880745" lastFinishedPulling="2026-02-14 19:18:58.02456803 +0000 UTC m=+2191.000976523" observedRunningTime="2026-02-14 19:18:58.516849187 +0000 UTC m=+2191.493257710" watchObservedRunningTime="2026-02-14 19:18:58.52245035 +0000 UTC m=+2191.498858883" Feb 14 19:19:06 crc kubenswrapper[4897]: I0214 19:19:06.606777 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="7f79cb40-76f5-40cd-9af1-82758f503ae7" containerID="37773124d9e1307a2d4cd3ea5599fb0486325218d47ad3353217fe9afa840b21" exitCode=0 Feb 14 19:19:06 crc kubenswrapper[4897]: I0214 19:19:06.606897 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" event={"ID":"7f79cb40-76f5-40cd-9af1-82758f503ae7","Type":"ContainerDied","Data":"37773124d9e1307a2d4cd3ea5599fb0486325218d47ad3353217fe9afa840b21"} Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.159995 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.265655 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-ssh-key-openstack-edpm-ipam\") pod \"7f79cb40-76f5-40cd-9af1-82758f503ae7\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.265877 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-inventory\") pod \"7f79cb40-76f5-40cd-9af1-82758f503ae7\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.266066 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvb42\" (UniqueName: \"kubernetes.io/projected/7f79cb40-76f5-40cd-9af1-82758f503ae7-kube-api-access-pvb42\") pod \"7f79cb40-76f5-40cd-9af1-82758f503ae7\" (UID: \"7f79cb40-76f5-40cd-9af1-82758f503ae7\") " Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.270831 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79cb40-76f5-40cd-9af1-82758f503ae7-kube-api-access-pvb42" 
(OuterVolumeSpecName: "kube-api-access-pvb42") pod "7f79cb40-76f5-40cd-9af1-82758f503ae7" (UID: "7f79cb40-76f5-40cd-9af1-82758f503ae7"). InnerVolumeSpecName "kube-api-access-pvb42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.294594 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-inventory" (OuterVolumeSpecName: "inventory") pod "7f79cb40-76f5-40cd-9af1-82758f503ae7" (UID: "7f79cb40-76f5-40cd-9af1-82758f503ae7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.297745 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f79cb40-76f5-40cd-9af1-82758f503ae7" (UID: "7f79cb40-76f5-40cd-9af1-82758f503ae7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.369207 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.369238 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvb42\" (UniqueName: \"kubernetes.io/projected/7f79cb40-76f5-40cd-9af1-82758f503ae7-kube-api-access-pvb42\") on node \"crc\" DevicePath \"\"" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.369251 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f79cb40-76f5-40cd-9af1-82758f503ae7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.643917 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" event={"ID":"7f79cb40-76f5-40cd-9af1-82758f503ae7","Type":"ContainerDied","Data":"ad375f3e9b90db3bf5918e5c49209f84e8cfef585c3e8e52312b57c3b6f74317"} Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.643981 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad375f3e9b90db3bf5918e5c49209f84e8cfef585c3e8e52312b57c3b6f74317" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.644088 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-446bh" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.754782 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s"] Feb 14 19:19:08 crc kubenswrapper[4897]: E0214 19:19:08.755508 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79cb40-76f5-40cd-9af1-82758f503ae7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.755538 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79cb40-76f5-40cd-9af1-82758f503ae7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.755896 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79cb40-76f5-40cd-9af1-82758f503ae7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.757074 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.761922 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.766135 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.766238 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.769083 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s"] Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.769382 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.883178 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.884378 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.884474 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxhc\" (UniqueName: \"kubernetes.io/projected/e1552b46-d09d-4156-97e3-0887c5071664-kube-api-access-btxhc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.987799 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.987884 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxhc\" (UniqueName: \"kubernetes.io/projected/e1552b46-d09d-4156-97e3-0887c5071664-kube-api-access-btxhc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:08 crc kubenswrapper[4897]: I0214 19:19:08.988288 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:09 crc kubenswrapper[4897]: I0214 19:19:09.000925 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:09 crc kubenswrapper[4897]: I0214 19:19:09.001517 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:09 crc kubenswrapper[4897]: I0214 19:19:09.010944 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxhc\" (UniqueName: \"kubernetes.io/projected/e1552b46-d09d-4156-97e3-0887c5071664-kube-api-access-btxhc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:09 crc kubenswrapper[4897]: I0214 19:19:09.083291 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:09 crc kubenswrapper[4897]: I0214 19:19:09.728260 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s"] Feb 14 19:19:10 crc kubenswrapper[4897]: I0214 19:19:10.693269 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" event={"ID":"e1552b46-d09d-4156-97e3-0887c5071664","Type":"ContainerStarted","Data":"dca91f9f4839081220797ba9a2bdebf6798999fd4d429e9dd611bd4b1142537c"} Feb 14 19:19:10 crc kubenswrapper[4897]: I0214 19:19:10.693687 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" event={"ID":"e1552b46-d09d-4156-97e3-0887c5071664","Type":"ContainerStarted","Data":"01026a403879c2129868215553c2dcf7f6345eefec9b1b3e0957fb3edc61e769"} Feb 14 19:19:10 crc kubenswrapper[4897]: I0214 19:19:10.721047 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" podStartSLOduration=2.316278481 podStartE2EDuration="2.721010552s" podCreationTimestamp="2026-02-14 19:19:08 +0000 UTC" firstStartedPulling="2026-02-14 19:19:09.737920254 +0000 UTC m=+2202.714328737" lastFinishedPulling="2026-02-14 19:19:10.142652295 +0000 UTC m=+2203.119060808" observedRunningTime="2026-02-14 19:19:10.713642334 +0000 UTC m=+2203.690050807" watchObservedRunningTime="2026-02-14 19:19:10.721010552 +0000 UTC m=+2203.697419045" Feb 14 19:19:19 crc kubenswrapper[4897]: I0214 19:19:19.814478 4897 generic.go:334] "Generic (PLEG): container finished" podID="e1552b46-d09d-4156-97e3-0887c5071664" containerID="dca91f9f4839081220797ba9a2bdebf6798999fd4d429e9dd611bd4b1142537c" exitCode=0 Feb 14 19:19:19 crc kubenswrapper[4897]: I0214 19:19:19.814566 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" event={"ID":"e1552b46-d09d-4156-97e3-0887c5071664","Type":"ContainerDied","Data":"dca91f9f4839081220797ba9a2bdebf6798999fd4d429e9dd611bd4b1142537c"} Feb 14 19:19:21 crc kubenswrapper[4897]: E0214 19:19:21.224495 4897 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.491319 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.562059 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btxhc\" (UniqueName: \"kubernetes.io/projected/e1552b46-d09d-4156-97e3-0887c5071664-kube-api-access-btxhc\") pod \"e1552b46-d09d-4156-97e3-0887c5071664\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.562209 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-ssh-key-openstack-edpm-ipam\") pod \"e1552b46-d09d-4156-97e3-0887c5071664\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.562238 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-inventory\") pod \"e1552b46-d09d-4156-97e3-0887c5071664\" (UID: \"e1552b46-d09d-4156-97e3-0887c5071664\") " Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.573978 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1552b46-d09d-4156-97e3-0887c5071664-kube-api-access-btxhc" (OuterVolumeSpecName: 
"kube-api-access-btxhc") pod "e1552b46-d09d-4156-97e3-0887c5071664" (UID: "e1552b46-d09d-4156-97e3-0887c5071664"). InnerVolumeSpecName "kube-api-access-btxhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.618582 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-inventory" (OuterVolumeSpecName: "inventory") pod "e1552b46-d09d-4156-97e3-0887c5071664" (UID: "e1552b46-d09d-4156-97e3-0887c5071664"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.633332 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e1552b46-d09d-4156-97e3-0887c5071664" (UID: "e1552b46-d09d-4156-97e3-0887c5071664"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.665741 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.665793 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1552b46-d09d-4156-97e3-0887c5071664-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.665805 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btxhc\" (UniqueName: \"kubernetes.io/projected/e1552b46-d09d-4156-97e3-0887c5071664-kube-api-access-btxhc\") on node \"crc\" DevicePath \"\"" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.850195 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" event={"ID":"e1552b46-d09d-4156-97e3-0887c5071664","Type":"ContainerDied","Data":"01026a403879c2129868215553c2dcf7f6345eefec9b1b3e0957fb3edc61e769"} Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.850234 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01026a403879c2129868215553c2dcf7f6345eefec9b1b3e0957fb3edc61e769" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.850318 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.954155 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"] Feb 14 19:19:21 crc kubenswrapper[4897]: E0214 19:19:21.955275 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1552b46-d09d-4156-97e3-0887c5071664" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.955322 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1552b46-d09d-4156-97e3-0887c5071664" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.955761 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1552b46-d09d-4156-97e3-0887c5071664" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.957277 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.961374 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.961925 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.962666 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.962856 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.962864 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.962899 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.962961 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.962990 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.963077 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 14 19:19:21 crc kubenswrapper[4897]: I0214 19:19:21.968464 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"] Feb 14 19:19:22 crc 
kubenswrapper[4897]: I0214 19:19:22.077341 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077451 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077486 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077533 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077678 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077738 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077817 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.077850 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078134 4897 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078193 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078433 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078634 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzhv\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-kube-api-access-djzhv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:19:22 crc 
kubenswrapper[4897]: I0214 19:19:22.078730 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078789 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078880 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.078942 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.181548 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.181899 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzhv\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-kube-api-access-djzhv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.182202 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.182368 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.182540 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.182715 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.182931 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.183233 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.183426 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.183631 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.183987 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.184180 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.184370 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.184520 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.184801 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.185169 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.189571 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.190159 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.190250 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.190922 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.192117 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.192683 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.192975 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.193133 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.193172 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.193589 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.193984 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.194557 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.195744 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.196804 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.200202 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.204696 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzhv\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-kube-api-access-djzhv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sffff\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.288316 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:19:22 crc kubenswrapper[4897]: I0214 19:19:22.917966 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"]
Feb 14 19:19:23 crc kubenswrapper[4897]: I0214 19:19:23.872992 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" event={"ID":"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85","Type":"ContainerStarted","Data":"f71c4a4ee4e99a12aaf5dec5a3e2f64ff26a2d98ab0de31570f3e8afb830cd68"}
Feb 14 19:19:23 crc kubenswrapper[4897]: I0214 19:19:23.873349 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" event={"ID":"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85","Type":"ContainerStarted","Data":"1bd48a67f6b2327c2cc43a8905f9bdfce9dcd3fa18f7e6905e15d0b0785ab2c5"}
Feb 14 19:19:23 crc kubenswrapper[4897]: I0214 19:19:23.898821 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" podStartSLOduration=2.467791251 podStartE2EDuration="2.898797102s" podCreationTimestamp="2026-02-14 19:19:21 +0000 UTC" firstStartedPulling="2026-02-14 19:19:22.934744601 +0000 UTC m=+2215.911153094" lastFinishedPulling="2026-02-14 19:19:23.365750462 +0000 UTC m=+2216.342158945" observedRunningTime="2026-02-14 19:19:23.895204581 +0000 UTC m=+2216.871613104" watchObservedRunningTime="2026-02-14 19:19:23.898797102 +0000 UTC m=+2216.875205595"
Feb 14 19:20:09 crc kubenswrapper[4897]: I0214 19:20:09.439765 4897 generic.go:334] "Generic (PLEG): container finished" podID="39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" containerID="f71c4a4ee4e99a12aaf5dec5a3e2f64ff26a2d98ab0de31570f3e8afb830cd68" exitCode=0
Feb 14 19:20:09 crc kubenswrapper[4897]: I0214 19:20:09.439836 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" event={"ID":"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85","Type":"ContainerDied","Data":"f71c4a4ee4e99a12aaf5dec5a3e2f64ff26a2d98ab0de31570f3e8afb830cd68"}
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.036794 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff"
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.095632 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-repo-setup-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.095739 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-bootstrap-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.095870 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-neutron-metadata-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.095912 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-nova-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096301 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-power-monitoring-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096371 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ovn-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096452 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-ovn-default-certs-0\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096553 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096623 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djzhv\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-kube-api-access-djzhv\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096695 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096785 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096880 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.096981 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-inventory\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.097104 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-libvirt-combined-ca-bundle\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.097162 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ssh-key-openstack-edpm-ipam\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.097243 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\" (UID: \"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85\") "
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.104653 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.106768 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.107607 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.107925 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.113213 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-kube-api-access-djzhv" (OuterVolumeSpecName: "kube-api-access-djzhv") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "kube-api-access-djzhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.114299 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.114435 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.115011 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.115107 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.115404 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.115520 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.115981 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.117797 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.119764 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.143742 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-inventory" (OuterVolumeSpecName: "inventory") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.163610 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" (UID: "39b7fda9-b6bc-4834-97ce-fc21c8fa6b85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204109 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204152 4897 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204170 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204181 4897 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204191 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204200 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204211 4897 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204221 4897 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204233 4897 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204244 4897 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-nova-combined-ca-bundle\") on node \"crc\"
DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204258 4897 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204270 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204284 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204302 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204313 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djzhv\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-kube-api-access-djzhv\") on node \"crc\" DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.204328 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/39b7fda9-b6bc-4834-97ce-fc21c8fa6b85-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.468680 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" event={"ID":"39b7fda9-b6bc-4834-97ce-fc21c8fa6b85","Type":"ContainerDied","Data":"1bd48a67f6b2327c2cc43a8905f9bdfce9dcd3fa18f7e6905e15d0b0785ab2c5"} Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.468732 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd48a67f6b2327c2cc43a8905f9bdfce9dcd3fa18f7e6905e15d0b0785ab2c5" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.468756 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sffff" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.573523 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns"] Feb 14 19:20:11 crc kubenswrapper[4897]: E0214 19:20:11.573995 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.574015 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.574261 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b7fda9-b6bc-4834-97ce-fc21c8fa6b85" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.575494 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.587699 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.588083 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.588198 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.588385 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.588466 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.597997 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns"] Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.720935 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.721013 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/690f39d6-bd85-4b27-97f5-148d4976aebb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: 
\"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.721249 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.721328 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.721354 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlgg\" (UniqueName: \"kubernetes.io/projected/690f39d6-bd85-4b27-97f5-148d4976aebb-kube-api-access-zwlgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.824115 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.824273 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.824321 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlgg\" (UniqueName: \"kubernetes.io/projected/690f39d6-bd85-4b27-97f5-148d4976aebb-kube-api-access-zwlgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.824489 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.824583 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/690f39d6-bd85-4b27-97f5-148d4976aebb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.826194 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/690f39d6-bd85-4b27-97f5-148d4976aebb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.829296 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.829734 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.830683 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.846464 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlgg\" (UniqueName: \"kubernetes.io/projected/690f39d6-bd85-4b27-97f5-148d4976aebb-kube-api-access-zwlgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mrdns\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:11 crc kubenswrapper[4897]: I0214 19:20:11.894725 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:20:12 crc kubenswrapper[4897]: I0214 19:20:12.540522 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns"] Feb 14 19:20:13 crc kubenswrapper[4897]: I0214 19:20:13.521756 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" event={"ID":"690f39d6-bd85-4b27-97f5-148d4976aebb","Type":"ContainerStarted","Data":"e48f18c593a6de5c80da5d312a73bd75e8f0956dd92b247ed28be7ec486ed0b4"} Feb 14 19:20:13 crc kubenswrapper[4897]: I0214 19:20:13.522433 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" event={"ID":"690f39d6-bd85-4b27-97f5-148d4976aebb","Type":"ContainerStarted","Data":"659497de5a85e86f625765b82310b521d43b325e505ba5fcec96ff7f3a83d494"} Feb 14 19:20:13 crc kubenswrapper[4897]: I0214 19:20:13.546465 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" podStartSLOduration=1.967627728 podStartE2EDuration="2.54644728s" podCreationTimestamp="2026-02-14 19:20:11 +0000 UTC" firstStartedPulling="2026-02-14 19:20:12.54123053 +0000 UTC m=+2265.517639003" lastFinishedPulling="2026-02-14 19:20:13.120050072 +0000 UTC m=+2266.096458555" observedRunningTime="2026-02-14 19:20:13.538136334 +0000 UTC m=+2266.514544837" watchObservedRunningTime="2026-02-14 19:20:13.54644728 +0000 UTC m=+2266.522855763" Feb 14 19:20:14 crc kubenswrapper[4897]: I0214 19:20:14.060134 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-cqk8v"] Feb 14 19:20:14 crc kubenswrapper[4897]: I0214 19:20:14.078531 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-cqk8v"] Feb 14 19:20:15 crc kubenswrapper[4897]: I0214 19:20:15.807438 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b49610a6-b99e-432f-9d5f-271cec21d2e6" path="/var/lib/kubelet/pods/b49610a6-b99e-432f-9d5f-271cec21d2e6/volumes" Feb 14 19:20:42 crc kubenswrapper[4897]: I0214 19:20:42.095208 4897 scope.go:117] "RemoveContainer" containerID="c1c2073afa58c1a74aad53a2ba7a7ddfc453057ab3e9c8cd8870c6a483dbab2d" Feb 14 19:21:00 crc kubenswrapper[4897]: I0214 19:21:00.057135 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-vxthf"] Feb 14 19:21:00 crc kubenswrapper[4897]: I0214 19:21:00.073304 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-vxthf"] Feb 14 19:21:01 crc kubenswrapper[4897]: I0214 19:21:01.726212 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:21:01 crc kubenswrapper[4897]: I0214 19:21:01.726549 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:21:01 crc kubenswrapper[4897]: I0214 19:21:01.811520 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20144c84-5098-42ee-9c62-576ed65ac421" path="/var/lib/kubelet/pods/20144c84-5098-42ee-9c62-576ed65ac421/volumes" Feb 14 19:21:18 crc kubenswrapper[4897]: I0214 19:21:18.305957 4897 generic.go:334] "Generic (PLEG): container finished" podID="690f39d6-bd85-4b27-97f5-148d4976aebb" containerID="e48f18c593a6de5c80da5d312a73bd75e8f0956dd92b247ed28be7ec486ed0b4" exitCode=0 Feb 14 19:21:18 crc kubenswrapper[4897]: I0214 19:21:18.306077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" event={"ID":"690f39d6-bd85-4b27-97f5-148d4976aebb","Type":"ContainerDied","Data":"e48f18c593a6de5c80da5d312a73bd75e8f0956dd92b247ed28be7ec486ed0b4"} Feb 14 19:21:19 crc kubenswrapper[4897]: I0214 19:21:19.863727 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.066334 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwlgg\" (UniqueName: \"kubernetes.io/projected/690f39d6-bd85-4b27-97f5-148d4976aebb-kube-api-access-zwlgg\") pod \"690f39d6-bd85-4b27-97f5-148d4976aebb\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.066623 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-inventory\") pod \"690f39d6-bd85-4b27-97f5-148d4976aebb\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.066857 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ovn-combined-ca-bundle\") pod \"690f39d6-bd85-4b27-97f5-148d4976aebb\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.067000 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/690f39d6-bd85-4b27-97f5-148d4976aebb-ovncontroller-config-0\") pod \"690f39d6-bd85-4b27-97f5-148d4976aebb\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.067223 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ssh-key-openstack-edpm-ipam\") pod \"690f39d6-bd85-4b27-97f5-148d4976aebb\" (UID: \"690f39d6-bd85-4b27-97f5-148d4976aebb\") " Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.075076 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "690f39d6-bd85-4b27-97f5-148d4976aebb" (UID: "690f39d6-bd85-4b27-97f5-148d4976aebb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.077895 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690f39d6-bd85-4b27-97f5-148d4976aebb-kube-api-access-zwlgg" (OuterVolumeSpecName: "kube-api-access-zwlgg") pod "690f39d6-bd85-4b27-97f5-148d4976aebb" (UID: "690f39d6-bd85-4b27-97f5-148d4976aebb"). InnerVolumeSpecName "kube-api-access-zwlgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.102279 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-inventory" (OuterVolumeSpecName: "inventory") pod "690f39d6-bd85-4b27-97f5-148d4976aebb" (UID: "690f39d6-bd85-4b27-97f5-148d4976aebb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.112505 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "690f39d6-bd85-4b27-97f5-148d4976aebb" (UID: "690f39d6-bd85-4b27-97f5-148d4976aebb"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.131912 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690f39d6-bd85-4b27-97f5-148d4976aebb-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "690f39d6-bd85-4b27-97f5-148d4976aebb" (UID: "690f39d6-bd85-4b27-97f5-148d4976aebb"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.171107 4897 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.171160 4897 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/690f39d6-bd85-4b27-97f5-148d4976aebb-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.171173 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.171184 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwlgg\" (UniqueName: \"kubernetes.io/projected/690f39d6-bd85-4b27-97f5-148d4976aebb-kube-api-access-zwlgg\") on node \"crc\" DevicePath \"\"" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.171198 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/690f39d6-bd85-4b27-97f5-148d4976aebb-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:21:20 crc 
kubenswrapper[4897]: I0214 19:21:20.336628 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" event={"ID":"690f39d6-bd85-4b27-97f5-148d4976aebb","Type":"ContainerDied","Data":"659497de5a85e86f625765b82310b521d43b325e505ba5fcec96ff7f3a83d494"} Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.336680 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="659497de5a85e86f625765b82310b521d43b325e505ba5fcec96ff7f3a83d494" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.336895 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mrdns" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.435098 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs"] Feb 14 19:21:20 crc kubenswrapper[4897]: E0214 19:21:20.436066 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690f39d6-bd85-4b27-97f5-148d4976aebb" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.436087 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="690f39d6-bd85-4b27-97f5-148d4976aebb" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.436333 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="690f39d6-bd85-4b27-97f5-148d4976aebb" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.437206 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.442532 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.442586 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.442898 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.443940 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.444134 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.445049 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.450997 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs"] Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.477545 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jls\" (UniqueName: \"kubernetes.io/projected/dc59b218-0f6d-4dcf-8809-74df47d30b47-kube-api-access-l4jls\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.477595 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.477787 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.477857 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.477887 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.477914 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.579617 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.579728 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.579771 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.579798 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.579850 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jls\" (UniqueName: \"kubernetes.io/projected/dc59b218-0f6d-4dcf-8809-74df47d30b47-kube-api-access-l4jls\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.579881 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.585789 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.586007 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.586501 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.593070 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.598728 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.601096 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jls\" (UniqueName: \"kubernetes.io/projected/dc59b218-0f6d-4dcf-8809-74df47d30b47-kube-api-access-l4jls\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs\" (UID: 
\"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:20 crc kubenswrapper[4897]: I0214 19:21:20.757833 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:21:21 crc kubenswrapper[4897]: I0214 19:21:21.332451 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs"] Feb 14 19:21:21 crc kubenswrapper[4897]: I0214 19:21:21.342627 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:21:22 crc kubenswrapper[4897]: I0214 19:21:22.356835 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" event={"ID":"dc59b218-0f6d-4dcf-8809-74df47d30b47","Type":"ContainerStarted","Data":"46f2073c3ea7b285915066e55b60f759da74578ee7239795c8e5d5a01313ee04"} Feb 14 19:21:23 crc kubenswrapper[4897]: I0214 19:21:23.369111 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" event={"ID":"dc59b218-0f6d-4dcf-8809-74df47d30b47","Type":"ContainerStarted","Data":"d5bedeb7b5a3557496c0d5bee1ca1158ebad80679b654e492510213ba9041ce9"} Feb 14 19:21:23 crc kubenswrapper[4897]: I0214 19:21:23.393777 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" podStartSLOduration=2.214463925 podStartE2EDuration="3.393759335s" podCreationTimestamp="2026-02-14 19:21:20 +0000 UTC" firstStartedPulling="2026-02-14 19:21:21.342361825 +0000 UTC m=+2334.318770308" lastFinishedPulling="2026-02-14 19:21:22.521657225 +0000 UTC m=+2335.498065718" observedRunningTime="2026-02-14 19:21:23.389802893 +0000 UTC m=+2336.366211396" watchObservedRunningTime="2026-02-14 
19:21:23.393759335 +0000 UTC m=+2336.370167818" Feb 14 19:21:31 crc kubenswrapper[4897]: I0214 19:21:31.726458 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:21:31 crc kubenswrapper[4897]: I0214 19:21:31.727104 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:21:42 crc kubenswrapper[4897]: I0214 19:21:42.216520 4897 scope.go:117] "RemoveContainer" containerID="ce8e06f868cc33d4d8d6e6a625e0df406ac0bda4a3373304f4aaafadda3adb1e" Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.725710 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.726421 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.726483 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.727641 4897 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.727729 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" gracePeriod=600 Feb 14 19:22:01 crc kubenswrapper[4897]: E0214 19:22:01.853415 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.888792 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" exitCode=0 Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.888835 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"} Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.888901 4897 scope.go:117] "RemoveContainer" 
containerID="dd708665e8ea240d87012ffb10ef37fcbe9e649061cee70ad605f1da4f00112e" Feb 14 19:22:01 crc kubenswrapper[4897]: I0214 19:22:01.890059 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:22:01 crc kubenswrapper[4897]: E0214 19:22:01.890926 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:22:01 crc kubenswrapper[4897]: E0214 19:22:01.954523 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:22:12 crc kubenswrapper[4897]: I0214 19:22:12.008439 4897 generic.go:334] "Generic (PLEG): container finished" podID="dc59b218-0f6d-4dcf-8809-74df47d30b47" containerID="d5bedeb7b5a3557496c0d5bee1ca1158ebad80679b654e492510213ba9041ce9" exitCode=0 Feb 14 19:22:12 crc kubenswrapper[4897]: I0214 19:22:12.008538 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" event={"ID":"dc59b218-0f6d-4dcf-8809-74df47d30b47","Type":"ContainerDied","Data":"d5bedeb7b5a3557496c0d5bee1ca1158ebad80679b654e492510213ba9041ce9"} Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.548232 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.621448 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-nova-metadata-neutron-config-0\") pod \"dc59b218-0f6d-4dcf-8809-74df47d30b47\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.621577 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-ovn-metadata-agent-neutron-config-0\") pod \"dc59b218-0f6d-4dcf-8809-74df47d30b47\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.621610 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4jls\" (UniqueName: \"kubernetes.io/projected/dc59b218-0f6d-4dcf-8809-74df47d30b47-kube-api-access-l4jls\") pod \"dc59b218-0f6d-4dcf-8809-74df47d30b47\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.622780 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-ssh-key-openstack-edpm-ipam\") pod \"dc59b218-0f6d-4dcf-8809-74df47d30b47\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.622912 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-metadata-combined-ca-bundle\") pod \"dc59b218-0f6d-4dcf-8809-74df47d30b47\" (UID: 
\"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.622958 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-inventory\") pod \"dc59b218-0f6d-4dcf-8809-74df47d30b47\" (UID: \"dc59b218-0f6d-4dcf-8809-74df47d30b47\") " Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.628274 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc59b218-0f6d-4dcf-8809-74df47d30b47-kube-api-access-l4jls" (OuterVolumeSpecName: "kube-api-access-l4jls") pod "dc59b218-0f6d-4dcf-8809-74df47d30b47" (UID: "dc59b218-0f6d-4dcf-8809-74df47d30b47"). InnerVolumeSpecName "kube-api-access-l4jls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.628601 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dc59b218-0f6d-4dcf-8809-74df47d30b47" (UID: "dc59b218-0f6d-4dcf-8809-74df47d30b47"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.654207 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "dc59b218-0f6d-4dcf-8809-74df47d30b47" (UID: "dc59b218-0f6d-4dcf-8809-74df47d30b47"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.656093 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dc59b218-0f6d-4dcf-8809-74df47d30b47" (UID: "dc59b218-0f6d-4dcf-8809-74df47d30b47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.662959 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-inventory" (OuterVolumeSpecName: "inventory") pod "dc59b218-0f6d-4dcf-8809-74df47d30b47" (UID: "dc59b218-0f6d-4dcf-8809-74df47d30b47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.678067 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "dc59b218-0f6d-4dcf-8809-74df47d30b47" (UID: "dc59b218-0f6d-4dcf-8809-74df47d30b47"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.725722 4897 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.725750 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.725765 4897 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.725775 4897 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.725786 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4jls\" (UniqueName: \"kubernetes.io/projected/dc59b218-0f6d-4dcf-8809-74df47d30b47-kube-api-access-l4jls\") on node \"crc\" DevicePath \"\"" Feb 14 19:22:13 crc kubenswrapper[4897]: I0214 19:22:13.725794 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc59b218-0f6d-4dcf-8809-74df47d30b47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.037847 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" event={"ID":"dc59b218-0f6d-4dcf-8809-74df47d30b47","Type":"ContainerDied","Data":"46f2073c3ea7b285915066e55b60f759da74578ee7239795c8e5d5a01313ee04"} Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.037890 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46f2073c3ea7b285915066e55b60f759da74578ee7239795c8e5d5a01313ee04" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.037938 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.158855 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb"] Feb 14 19:22:14 crc kubenswrapper[4897]: E0214 19:22:14.159311 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc59b218-0f6d-4dcf-8809-74df47d30b47" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.159332 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc59b218-0f6d-4dcf-8809-74df47d30b47" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.159582 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc59b218-0f6d-4dcf-8809-74df47d30b47" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.160381 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.164061 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.164177 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.164216 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.164288 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.165134 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.183438 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb"] Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.235993 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzb4j\" (UniqueName: \"kubernetes.io/projected/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-kube-api-access-tzb4j\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.236576 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: 
\"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.236610 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.236658 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.236701 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.339393 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.339452 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.339514 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.339550 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.339654 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzb4j\" (UniqueName: \"kubernetes.io/projected/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-kube-api-access-tzb4j\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.345394 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: 
\"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.345878 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.345909 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.356725 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.369476 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzb4j\" (UniqueName: \"kubernetes.io/projected/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-kube-api-access-tzb4j\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-68lkb\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:14 crc kubenswrapper[4897]: I0214 19:22:14.526437 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:22:15 crc kubenswrapper[4897]: I0214 19:22:15.124586 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb"] Feb 14 19:22:16 crc kubenswrapper[4897]: I0214 19:22:16.064545 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" event={"ID":"f9698ab0-7eea-4fe4-be5a-b864ed73c28f","Type":"ContainerStarted","Data":"268c0d886cc827b2121e8a7b04de35ad6abe2d0d811ef99e02187eadb740e0b9"} Feb 14 19:22:16 crc kubenswrapper[4897]: I0214 19:22:16.065478 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" event={"ID":"f9698ab0-7eea-4fe4-be5a-b864ed73c28f","Type":"ContainerStarted","Data":"f594ef78575c3ec3cd3302a91eca870d5dbff4485704bb1ded12a1b93df86701"} Feb 14 19:22:16 crc kubenswrapper[4897]: I0214 19:22:16.096349 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" podStartSLOduration=1.595659938 podStartE2EDuration="2.096331708s" podCreationTimestamp="2026-02-14 19:22:14 +0000 UTC" firstStartedPulling="2026-02-14 19:22:15.140279913 +0000 UTC m=+2388.116688396" lastFinishedPulling="2026-02-14 19:22:15.640951683 +0000 UTC m=+2388.617360166" observedRunningTime="2026-02-14 19:22:16.089796176 +0000 UTC m=+2389.066204649" watchObservedRunningTime="2026-02-14 19:22:16.096331708 +0000 UTC m=+2389.072740191" Feb 14 19:22:16 crc kubenswrapper[4897]: I0214 19:22:16.794692 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:22:16 crc kubenswrapper[4897]: E0214 19:22:16.795101 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:22:28 crc kubenswrapper[4897]: I0214 19:22:28.794214 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:22:28 crc kubenswrapper[4897]: E0214 19:22:28.795114 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:22:40 crc kubenswrapper[4897]: I0214 19:22:40.794751 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:22:40 crc kubenswrapper[4897]: E0214 19:22:40.796167 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:22:54 crc kubenswrapper[4897]: I0214 19:22:54.794842 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:22:54 crc kubenswrapper[4897]: E0214 19:22:54.796111 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:23:07 crc kubenswrapper[4897]: I0214 19:23:07.809125 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:23:07 crc kubenswrapper[4897]: E0214 19:23:07.810338 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.620189 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n879h"] Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.627769 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.640557 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n879h"] Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.762331 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-utilities\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.762767 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kbf\" (UniqueName: \"kubernetes.io/projected/8e523c68-da9c-491a-a07a-ec8e3d1a9696-kube-api-access-h5kbf\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.763007 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-catalog-content\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.866126 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-catalog-content\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.866279 4897 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-utilities\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.866504 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kbf\" (UniqueName: \"kubernetes.io/projected/8e523c68-da9c-491a-a07a-ec8e3d1a9696-kube-api-access-h5kbf\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.868336 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-catalog-content\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.868614 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-utilities\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.886499 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kbf\" (UniqueName: \"kubernetes.io/projected/8e523c68-da9c-491a-a07a-ec8e3d1a9696-kube-api-access-h5kbf\") pod \"redhat-operators-n879h\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:15 crc kubenswrapper[4897]: I0214 19:23:15.962896 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:16 crc kubenswrapper[4897]: I0214 19:23:16.445004 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n879h"] Feb 14 19:23:16 crc kubenswrapper[4897]: I0214 19:23:16.979238 4897 generic.go:334] "Generic (PLEG): container finished" podID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerID="0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32" exitCode=0 Feb 14 19:23:16 crc kubenswrapper[4897]: I0214 19:23:16.979294 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerDied","Data":"0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32"} Feb 14 19:23:16 crc kubenswrapper[4897]: I0214 19:23:16.979323 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerStarted","Data":"d5f8d68f549d4353e806769fd3d7a1cc7e41fc75db54ab6573006c9909b3d99f"} Feb 14 19:23:17 crc kubenswrapper[4897]: I0214 19:23:17.995542 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerStarted","Data":"54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0"} Feb 14 19:23:19 crc kubenswrapper[4897]: I0214 19:23:19.795064 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:23:19 crc kubenswrapper[4897]: E0214 19:23:19.795640 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:23:23 crc kubenswrapper[4897]: I0214 19:23:23.069862 4897 generic.go:334] "Generic (PLEG): container finished" podID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerID="54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0" exitCode=0 Feb 14 19:23:23 crc kubenswrapper[4897]: I0214 19:23:23.069937 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerDied","Data":"54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0"} Feb 14 19:23:24 crc kubenswrapper[4897]: I0214 19:23:24.089307 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerStarted","Data":"7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4"} Feb 14 19:23:24 crc kubenswrapper[4897]: I0214 19:23:24.135603 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n879h" podStartSLOduration=2.609431342 podStartE2EDuration="9.135583873s" podCreationTimestamp="2026-02-14 19:23:15 +0000 UTC" firstStartedPulling="2026-02-14 19:23:16.981438103 +0000 UTC m=+2449.957846586" lastFinishedPulling="2026-02-14 19:23:23.507590624 +0000 UTC m=+2456.483999117" observedRunningTime="2026-02-14 19:23:24.122248613 +0000 UTC m=+2457.098657136" watchObservedRunningTime="2026-02-14 19:23:24.135583873 +0000 UTC m=+2457.111992356" Feb 14 19:23:25 crc kubenswrapper[4897]: I0214 19:23:25.964726 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:25 crc kubenswrapper[4897]: I0214 
19:23:25.965125 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:27 crc kubenswrapper[4897]: I0214 19:23:27.016003 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n879h" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="registry-server" probeResult="failure" output=< Feb 14 19:23:27 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:23:27 crc kubenswrapper[4897]: > Feb 14 19:23:32 crc kubenswrapper[4897]: I0214 19:23:32.794366 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:23:32 crc kubenswrapper[4897]: E0214 19:23:32.795475 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:23:36 crc kubenswrapper[4897]: I0214 19:23:36.048598 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:36 crc kubenswrapper[4897]: I0214 19:23:36.117013 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.307065 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n879h"] Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.307590 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n879h" 
podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="registry-server" containerID="cri-o://7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4" gracePeriod=2 Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.804472 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.967615 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5kbf\" (UniqueName: \"kubernetes.io/projected/8e523c68-da9c-491a-a07a-ec8e3d1a9696-kube-api-access-h5kbf\") pod \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.967826 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-utilities\") pod \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.968017 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-catalog-content\") pod \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\" (UID: \"8e523c68-da9c-491a-a07a-ec8e3d1a9696\") " Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.971357 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-utilities" (OuterVolumeSpecName: "utilities") pod "8e523c68-da9c-491a-a07a-ec8e3d1a9696" (UID: "8e523c68-da9c-491a-a07a-ec8e3d1a9696"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:23:37 crc kubenswrapper[4897]: I0214 19:23:37.980265 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e523c68-da9c-491a-a07a-ec8e3d1a9696-kube-api-access-h5kbf" (OuterVolumeSpecName: "kube-api-access-h5kbf") pod "8e523c68-da9c-491a-a07a-ec8e3d1a9696" (UID: "8e523c68-da9c-491a-a07a-ec8e3d1a9696"). InnerVolumeSpecName "kube-api-access-h5kbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.073378 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5kbf\" (UniqueName: \"kubernetes.io/projected/8e523c68-da9c-491a-a07a-ec8e3d1a9696-kube-api-access-h5kbf\") on node \"crc\" DevicePath \"\"" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.073449 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.087334 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e523c68-da9c-491a-a07a-ec8e3d1a9696" (UID: "8e523c68-da9c-491a-a07a-ec8e3d1a9696"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.175414 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e523c68-da9c-491a-a07a-ec8e3d1a9696-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.276304 4897 generic.go:334] "Generic (PLEG): container finished" podID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerID="7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4" exitCode=0 Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.276426 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerDied","Data":"7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4"} Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.276471 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n879h" event={"ID":"8e523c68-da9c-491a-a07a-ec8e3d1a9696","Type":"ContainerDied","Data":"d5f8d68f549d4353e806769fd3d7a1cc7e41fc75db54ab6573006c9909b3d99f"} Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.276503 4897 scope.go:117] "RemoveContainer" containerID="7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.276737 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n879h" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.324702 4897 scope.go:117] "RemoveContainer" containerID="54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.361645 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n879h"] Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.368243 4897 scope.go:117] "RemoveContainer" containerID="0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.376993 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n879h"] Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.416479 4897 scope.go:117] "RemoveContainer" containerID="7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4" Feb 14 19:23:38 crc kubenswrapper[4897]: E0214 19:23:38.417198 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4\": container with ID starting with 7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4 not found: ID does not exist" containerID="7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.417246 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4"} err="failed to get container status \"7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4\": rpc error: code = NotFound desc = could not find container \"7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4\": container with ID starting with 7279c013cbe790c2c9b9d0d0c893f72f39bed6dc2ed3b421ad1f9b3531ac1eb4 not found: ID does 
not exist" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.417274 4897 scope.go:117] "RemoveContainer" containerID="54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0" Feb 14 19:23:38 crc kubenswrapper[4897]: E0214 19:23:38.417848 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0\": container with ID starting with 54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0 not found: ID does not exist" containerID="54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.417886 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0"} err="failed to get container status \"54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0\": rpc error: code = NotFound desc = could not find container \"54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0\": container with ID starting with 54965211271dd516deca9925896501bc2a0c2f84f116d5d1116120c79f276fd0 not found: ID does not exist" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.417931 4897 scope.go:117] "RemoveContainer" containerID="0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32" Feb 14 19:23:38 crc kubenswrapper[4897]: E0214 19:23:38.418266 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32\": container with ID starting with 0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32 not found: ID does not exist" containerID="0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32" Feb 14 19:23:38 crc kubenswrapper[4897]: I0214 19:23:38.418310 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32"} err="failed to get container status \"0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32\": rpc error: code = NotFound desc = could not find container \"0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32\": container with ID starting with 0913ceed1a47cf8b01912be315ab0c135390ba8a3d0796d0eb8a9dcfe3392d32 not found: ID does not exist" Feb 14 19:23:39 crc kubenswrapper[4897]: I0214 19:23:39.811818 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" path="/var/lib/kubelet/pods/8e523c68-da9c-491a-a07a-ec8e3d1a9696/volumes" Feb 14 19:23:44 crc kubenswrapper[4897]: I0214 19:23:44.794660 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:23:44 crc kubenswrapper[4897]: E0214 19:23:44.795500 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:23:58 crc kubenswrapper[4897]: I0214 19:23:58.795941 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:23:58 crc kubenswrapper[4897]: E0214 19:23:58.796841 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:24:12 crc kubenswrapper[4897]: I0214 19:24:12.793974 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:24:12 crc kubenswrapper[4897]: E0214 19:24:12.794914 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:24:24 crc kubenswrapper[4897]: I0214 19:24:24.795276 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:24:24 crc kubenswrapper[4897]: E0214 19:24:24.797882 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:24:38 crc kubenswrapper[4897]: I0214 19:24:38.794748 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:24:38 crc kubenswrapper[4897]: E0214 19:24:38.795618 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:24:50 crc kubenswrapper[4897]: I0214 19:24:50.794169 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:24:50 crc kubenswrapper[4897]: E0214 19:24:50.795353 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:25:04 crc kubenswrapper[4897]: I0214 19:25:04.794347 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:25:04 crc kubenswrapper[4897]: E0214 19:25:04.795258 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:25:15 crc kubenswrapper[4897]: I0214 19:25:15.794120 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:25:15 crc kubenswrapper[4897]: E0214 19:25:15.795059 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:25:28 crc kubenswrapper[4897]: I0214 19:25:28.794233 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:25:28 crc kubenswrapper[4897]: E0214 19:25:28.795202 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:25:42 crc kubenswrapper[4897]: I0214 19:25:42.794657 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:25:42 crc kubenswrapper[4897]: E0214 19:25:42.795950 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:25:53 crc kubenswrapper[4897]: I0214 19:25:53.795000 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:25:53 crc kubenswrapper[4897]: E0214 19:25:53.796724 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:26:04 crc kubenswrapper[4897]: I0214 19:26:04.794067 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:26:04 crc kubenswrapper[4897]: E0214 19:26:04.794915 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:26:11 crc kubenswrapper[4897]: I0214 19:26:11.440134 4897 generic.go:334] "Generic (PLEG): container finished" podID="f9698ab0-7eea-4fe4-be5a-b864ed73c28f" containerID="268c0d886cc827b2121e8a7b04de35ad6abe2d0d811ef99e02187eadb740e0b9" exitCode=0 Feb 14 19:26:11 crc kubenswrapper[4897]: I0214 19:26:11.440284 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" event={"ID":"f9698ab0-7eea-4fe4-be5a-b864ed73c28f","Type":"ContainerDied","Data":"268c0d886cc827b2121e8a7b04de35ad6abe2d0d811ef99e02187eadb740e0b9"} Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.010062 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.075967 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-secret-0\") pod \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.076015 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-inventory\") pod \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.076196 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-ssh-key-openstack-edpm-ipam\") pod \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.076340 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-combined-ca-bundle\") pod \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.076380 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzb4j\" (UniqueName: \"kubernetes.io/projected/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-kube-api-access-tzb4j\") pod \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\" (UID: \"f9698ab0-7eea-4fe4-be5a-b864ed73c28f\") " Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.082748 4897 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f9698ab0-7eea-4fe4-be5a-b864ed73c28f" (UID: "f9698ab0-7eea-4fe4-be5a-b864ed73c28f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.083421 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-kube-api-access-tzb4j" (OuterVolumeSpecName: "kube-api-access-tzb4j") pod "f9698ab0-7eea-4fe4-be5a-b864ed73c28f" (UID: "f9698ab0-7eea-4fe4-be5a-b864ed73c28f"). InnerVolumeSpecName "kube-api-access-tzb4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.113244 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9698ab0-7eea-4fe4-be5a-b864ed73c28f" (UID: "f9698ab0-7eea-4fe4-be5a-b864ed73c28f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.125714 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-inventory" (OuterVolumeSpecName: "inventory") pod "f9698ab0-7eea-4fe4-be5a-b864ed73c28f" (UID: "f9698ab0-7eea-4fe4-be5a-b864ed73c28f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.133801 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "f9698ab0-7eea-4fe4-be5a-b864ed73c28f" (UID: "f9698ab0-7eea-4fe4-be5a-b864ed73c28f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.181189 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.181531 4897 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.181544 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzb4j\" (UniqueName: \"kubernetes.io/projected/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-kube-api-access-tzb4j\") on node \"crc\" DevicePath \"\"" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.181558 4897 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.181570 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9698ab0-7eea-4fe4-be5a-b864ed73c28f-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.472141 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" event={"ID":"f9698ab0-7eea-4fe4-be5a-b864ed73c28f","Type":"ContainerDied","Data":"f594ef78575c3ec3cd3302a91eca870d5dbff4485704bb1ded12a1b93df86701"} Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.472212 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f594ef78575c3ec3cd3302a91eca870d5dbff4485704bb1ded12a1b93df86701" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.472215 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-68lkb" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.603377 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx"] Feb 14 19:26:13 crc kubenswrapper[4897]: E0214 19:26:13.604222 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="registry-server" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.604248 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="registry-server" Feb 14 19:26:13 crc kubenswrapper[4897]: E0214 19:26:13.604284 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9698ab0-7eea-4fe4-be5a-b864ed73c28f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.604297 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9698ab0-7eea-4fe4-be5a-b864ed73c28f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 19:26:13 crc kubenswrapper[4897]: E0214 19:26:13.604322 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="extract-utilities" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.604334 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="extract-utilities" Feb 14 19:26:13 crc kubenswrapper[4897]: E0214 19:26:13.604353 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="extract-content" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.604365 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="extract-content" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.604760 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e523c68-da9c-491a-a07a-ec8e3d1a9696" containerName="registry-server" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.604812 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9698ab0-7eea-4fe4-be5a-b864ed73c28f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.606231 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.609722 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.610098 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.610203 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.610427 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.610503 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.610561 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.611061 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.619998 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx"]
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696452 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx"
Feb 14 19:26:13 crc kubenswrapper[4897]: I0214
19:26:13.696556 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696615 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696674 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696802 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696880 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ccc\" 
(UniqueName: \"kubernetes.io/projected/f5be1414-fd81-4c71-80b7-94a96048bd6b-kube-api-access-86ccc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696927 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.696982 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.697102 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.697214 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: 
\"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.697292 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799314 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799434 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799492 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86ccc\" (UniqueName: \"kubernetes.io/projected/f5be1414-fd81-4c71-80b7-94a96048bd6b-kube-api-access-86ccc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799524 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799572 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799665 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799745 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799811 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: 
\"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799919 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.799971 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.800066 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.802856 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.805382 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.807113 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.807116 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.808452 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.808650 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.808483 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.808759 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.808982 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.811039 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.821456 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86ccc\" (UniqueName: 
\"kubernetes.io/projected/f5be1414-fd81-4c71-80b7-94a96048bd6b-kube-api-access-86ccc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2h9bx\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:13 crc kubenswrapper[4897]: I0214 19:26:13.965061 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:26:14 crc kubenswrapper[4897]: I0214 19:26:14.503395 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx"] Feb 14 19:26:15 crc kubenswrapper[4897]: I0214 19:26:15.504495 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" event={"ID":"f5be1414-fd81-4c71-80b7-94a96048bd6b","Type":"ContainerStarted","Data":"5ea72e182945c2eae7c4ac10b4235f6a0c044080d91183cefc566e47eadec282"} Feb 14 19:26:15 crc kubenswrapper[4897]: I0214 19:26:15.506315 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" event={"ID":"f5be1414-fd81-4c71-80b7-94a96048bd6b","Type":"ContainerStarted","Data":"8ffa70e2a4d031e5628d1e7d9ad821f6318bb790c4a79f162c5b2d525f7e77a8"} Feb 14 19:26:15 crc kubenswrapper[4897]: I0214 19:26:15.531307 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" podStartSLOduration=1.9722306920000001 podStartE2EDuration="2.531281886s" podCreationTimestamp="2026-02-14 19:26:13 +0000 UTC" firstStartedPulling="2026-02-14 19:26:14.502703756 +0000 UTC m=+2627.479112239" lastFinishedPulling="2026-02-14 19:26:15.06175491 +0000 UTC m=+2628.038163433" observedRunningTime="2026-02-14 19:26:15.526737595 +0000 UTC m=+2628.503146128" watchObservedRunningTime="2026-02-14 19:26:15.531281886 +0000 UTC m=+2628.507690399" Feb 14 19:26:17 crc kubenswrapper[4897]: 
I0214 19:26:17.794901 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"
Feb 14 19:26:17 crc kubenswrapper[4897]: E0214 19:26:17.796440 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:26:29 crc kubenswrapper[4897]: I0214 19:26:29.798157 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"
Feb 14 19:26:29 crc kubenswrapper[4897]: E0214 19:26:29.799283 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:26:41 crc kubenswrapper[4897]: I0214 19:26:41.795776 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"
Feb 14 19:26:41 crc kubenswrapper[4897]: E0214 19:26:41.796682 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:26:55 crc kubenswrapper[4897]: I0214 19:26:55.794508 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"
Feb 14 19:26:55 crc kubenswrapper[4897]: E0214 19:26:55.795219 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:27:07 crc kubenswrapper[4897]: I0214 19:27:07.807508 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5"
Feb 14 19:27:08 crc kubenswrapper[4897]: I0214 19:27:08.159644 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8"}
Feb 14 19:28:44 crc kubenswrapper[4897]: I0214 19:28:44.387657 4897 generic.go:334] "Generic (PLEG): container finished" podID="f5be1414-fd81-4c71-80b7-94a96048bd6b" containerID="5ea72e182945c2eae7c4ac10b4235f6a0c044080d91183cefc566e47eadec282" exitCode=0
Feb 14 19:28:44 crc kubenswrapper[4897]: I0214 19:28:44.387733 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" event={"ID":"f5be1414-fd81-4c71-80b7-94a96048bd6b","Type":"ContainerDied","Data":"5ea72e182945c2eae7c4ac10b4235f6a0c044080d91183cefc566e47eadec282"}
Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.123511 4897 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308417 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86ccc\" (UniqueName: \"kubernetes.io/projected/f5be1414-fd81-4c71-80b7-94a96048bd6b-kube-api-access-86ccc\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308522 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-1\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308558 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-combined-ca-bundle\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308616 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-ssh-key-openstack-edpm-ipam\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308697 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-2\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: 
I0214 19:28:46.308732 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-0\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308812 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-0\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.308888 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-3\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.309064 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-extra-config-0\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.309111 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-1\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.309174 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-inventory\") pod \"f5be1414-fd81-4c71-80b7-94a96048bd6b\" (UID: \"f5be1414-fd81-4c71-80b7-94a96048bd6b\") " Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.316189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5be1414-fd81-4c71-80b7-94a96048bd6b-kube-api-access-86ccc" (OuterVolumeSpecName: "kube-api-access-86ccc") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "kube-api-access-86ccc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.318008 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.352472 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.356756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.358255 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.362634 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.368807 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-inventory" (OuterVolumeSpecName: "inventory") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.377277 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.378459 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.382125 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.386321 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f5be1414-fd81-4c71-80b7-94a96048bd6b" (UID: "f5be1414-fd81-4c71-80b7-94a96048bd6b"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412552 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412583 4897 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412593 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412603 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412612 4897 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412622 4897 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412633 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-inventory\") 
on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412641 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86ccc\" (UniqueName: \"kubernetes.io/projected/f5be1414-fd81-4c71-80b7-94a96048bd6b-kube-api-access-86ccc\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412649 4897 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412658 4897 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.412666 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5be1414-fd81-4c71-80b7-94a96048bd6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.440458 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" event={"ID":"f5be1414-fd81-4c71-80b7-94a96048bd6b","Type":"ContainerDied","Data":"8ffa70e2a4d031e5628d1e7d9ad821f6318bb790c4a79f162c5b2d525f7e77a8"} Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.440569 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ffa70e2a4d031e5628d1e7d9ad821f6318bb790c4a79f162c5b2d525f7e77a8" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.440566 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2h9bx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.540147 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx"] Feb 14 19:28:46 crc kubenswrapper[4897]: E0214 19:28:46.540864 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5be1414-fd81-4c71-80b7-94a96048bd6b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.540890 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5be1414-fd81-4c71-80b7-94a96048bd6b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.541286 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5be1414-fd81-4c71-80b7-94a96048bd6b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.542365 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.544415 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.545013 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.545067 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.545159 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.545445 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.551543 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx"] Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.718411 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.718518 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.718558 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.718590 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.718617 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.718925 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9lz\" (UniqueName: \"kubernetes.io/projected/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-kube-api-access-jx9lz\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.719207 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.821975 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.822140 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.822266 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: 
I0214 19:28:46.822319 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.822360 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.822400 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.822547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9lz\" (UniqueName: \"kubernetes.io/projected/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-kube-api-access-jx9lz\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.827822 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.828474 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.829053 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.829163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.830075 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.833258 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.840637 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx9lz\" (UniqueName: \"kubernetes.io/projected/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-kube-api-access-jx9lz\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-76ncx\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:46 crc kubenswrapper[4897]: I0214 19:28:46.869333 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:28:47 crc kubenswrapper[4897]: I0214 19:28:47.484208 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx"] Feb 14 19:28:47 crc kubenswrapper[4897]: I0214 19:28:47.494720 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:28:48 crc kubenswrapper[4897]: I0214 19:28:48.462415 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" event={"ID":"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00","Type":"ContainerStarted","Data":"e6c19c1c83a7eb9d1c09e056bfc94f7b6490696a370dc179db240d263bec12be"} Feb 14 19:28:48 crc kubenswrapper[4897]: I0214 19:28:48.463049 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" event={"ID":"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00","Type":"ContainerStarted","Data":"4c5fd59c2a076fa63c41005421ed74d8ae3976a25782c608b5500270a2bc7afa"} Feb 14 19:28:48 crc kubenswrapper[4897]: I0214 19:28:48.487410 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" podStartSLOduration=2.015509706 podStartE2EDuration="2.487387651s" podCreationTimestamp="2026-02-14 19:28:46 +0000 UTC" firstStartedPulling="2026-02-14 19:28:47.494379635 +0000 UTC m=+2780.470788128" lastFinishedPulling="2026-02-14 19:28:47.96625755 +0000 UTC m=+2780.942666073" observedRunningTime="2026-02-14 19:28:48.484821212 +0000 UTC m=+2781.461229735" watchObservedRunningTime="2026-02-14 19:28:48.487387651 +0000 UTC m=+2781.463796154" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.083251 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78b6r"] Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 
19:29:05.086493 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.130217 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78b6r"] Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.209976 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-utilities\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.210155 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-catalog-content\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.210277 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmk89\" (UniqueName: \"kubernetes.io/projected/88ce4164-7f76-4e44-b018-19f081557efd-kube-api-access-xmk89\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.314632 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk89\" (UniqueName: \"kubernetes.io/projected/88ce4164-7f76-4e44-b018-19f081557efd-kube-api-access-xmk89\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc 
kubenswrapper[4897]: I0214 19:29:05.314825 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-utilities\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.314942 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-catalog-content\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.315748 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-catalog-content\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.316439 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-utilities\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.341117 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmk89\" (UniqueName: \"kubernetes.io/projected/88ce4164-7f76-4e44-b018-19f081557efd-kube-api-access-xmk89\") pod \"community-operators-78b6r\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.418939 4897 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:05 crc kubenswrapper[4897]: I0214 19:29:05.923211 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78b6r"] Feb 14 19:29:05 crc kubenswrapper[4897]: W0214 19:29:05.935632 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88ce4164_7f76_4e44_b018_19f081557efd.slice/crio-dc2256f54098a3ab238d728a2c9f4eaa5a57aacafac4652812ddb59e592686a9 WatchSource:0}: Error finding container dc2256f54098a3ab238d728a2c9f4eaa5a57aacafac4652812ddb59e592686a9: Status 404 returned error can't find the container with id dc2256f54098a3ab238d728a2c9f4eaa5a57aacafac4652812ddb59e592686a9 Feb 14 19:29:06 crc kubenswrapper[4897]: I0214 19:29:06.705872 4897 generic.go:334] "Generic (PLEG): container finished" podID="88ce4164-7f76-4e44-b018-19f081557efd" containerID="0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06" exitCode=0 Feb 14 19:29:06 crc kubenswrapper[4897]: I0214 19:29:06.705949 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78b6r" event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerDied","Data":"0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06"} Feb 14 19:29:06 crc kubenswrapper[4897]: I0214 19:29:06.706169 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78b6r" event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerStarted","Data":"dc2256f54098a3ab238d728a2c9f4eaa5a57aacafac4652812ddb59e592686a9"} Feb 14 19:29:07 crc kubenswrapper[4897]: I0214 19:29:07.720170 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78b6r" 
event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerStarted","Data":"b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127"} Feb 14 19:29:09 crc kubenswrapper[4897]: I0214 19:29:09.747580 4897 generic.go:334] "Generic (PLEG): container finished" podID="88ce4164-7f76-4e44-b018-19f081557efd" containerID="b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127" exitCode=0 Feb 14 19:29:09 crc kubenswrapper[4897]: I0214 19:29:09.748127 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78b6r" event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerDied","Data":"b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127"} Feb 14 19:29:10 crc kubenswrapper[4897]: I0214 19:29:10.761255 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78b6r" event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerStarted","Data":"c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0"} Feb 14 19:29:10 crc kubenswrapper[4897]: I0214 19:29:10.798866 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78b6r" podStartSLOduration=2.360908035 podStartE2EDuration="5.798846977s" podCreationTimestamp="2026-02-14 19:29:05 +0000 UTC" firstStartedPulling="2026-02-14 19:29:06.708687255 +0000 UTC m=+2799.685095738" lastFinishedPulling="2026-02-14 19:29:10.146626187 +0000 UTC m=+2803.123034680" observedRunningTime="2026-02-14 19:29:10.787275008 +0000 UTC m=+2803.763683511" watchObservedRunningTime="2026-02-14 19:29:10.798846977 +0000 UTC m=+2803.775255460" Feb 14 19:29:15 crc kubenswrapper[4897]: I0214 19:29:15.419730 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:15 crc kubenswrapper[4897]: I0214 19:29:15.420478 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:15 crc kubenswrapper[4897]: I0214 19:29:15.496325 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:15 crc kubenswrapper[4897]: I0214 19:29:15.921896 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:15 crc kubenswrapper[4897]: I0214 19:29:15.972168 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78b6r"] Feb 14 19:29:17 crc kubenswrapper[4897]: I0214 19:29:17.843236 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-78b6r" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="registry-server" containerID="cri-o://c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0" gracePeriod=2 Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.434885 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.497552 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmk89\" (UniqueName: \"kubernetes.io/projected/88ce4164-7f76-4e44-b018-19f081557efd-kube-api-access-xmk89\") pod \"88ce4164-7f76-4e44-b018-19f081557efd\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.497788 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-utilities\") pod \"88ce4164-7f76-4e44-b018-19f081557efd\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.497848 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-catalog-content\") pod \"88ce4164-7f76-4e44-b018-19f081557efd\" (UID: \"88ce4164-7f76-4e44-b018-19f081557efd\") " Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.498638 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-utilities" (OuterVolumeSpecName: "utilities") pod "88ce4164-7f76-4e44-b018-19f081557efd" (UID: "88ce4164-7f76-4e44-b018-19f081557efd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.517254 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ce4164-7f76-4e44-b018-19f081557efd-kube-api-access-xmk89" (OuterVolumeSpecName: "kube-api-access-xmk89") pod "88ce4164-7f76-4e44-b018-19f081557efd" (UID: "88ce4164-7f76-4e44-b018-19f081557efd"). InnerVolumeSpecName "kube-api-access-xmk89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.577384 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88ce4164-7f76-4e44-b018-19f081557efd" (UID: "88ce4164-7f76-4e44-b018-19f081557efd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.600994 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmk89\" (UniqueName: \"kubernetes.io/projected/88ce4164-7f76-4e44-b018-19f081557efd-kube-api-access-xmk89\") on node \"crc\" DevicePath \"\"" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.601048 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.601058 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ce4164-7f76-4e44-b018-19f081557efd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.856537 4897 generic.go:334] "Generic (PLEG): container finished" podID="88ce4164-7f76-4e44-b018-19f081557efd" containerID="c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0" exitCode=0 Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.856583 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78b6r" event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerDied","Data":"c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0"} Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.856610 4897 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-78b6r" event={"ID":"88ce4164-7f76-4e44-b018-19f081557efd","Type":"ContainerDied","Data":"dc2256f54098a3ab238d728a2c9f4eaa5a57aacafac4652812ddb59e592686a9"} Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.856629 4897 scope.go:117] "RemoveContainer" containerID="c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.856635 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78b6r" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.886781 4897 scope.go:117] "RemoveContainer" containerID="b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.910391 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78b6r"] Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.925232 4897 scope.go:117] "RemoveContainer" containerID="0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.926559 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-78b6r"] Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.985937 4897 scope.go:117] "RemoveContainer" containerID="c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0" Feb 14 19:29:18 crc kubenswrapper[4897]: E0214 19:29:18.986848 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0\": container with ID starting with c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0 not found: ID does not exist" containerID="c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 
19:29:18.986876 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0"} err="failed to get container status \"c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0\": rpc error: code = NotFound desc = could not find container \"c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0\": container with ID starting with c05e154af91efde8efa1937d830211484bc74afcce3f415c0b99729575eea9a0 not found: ID does not exist" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.986895 4897 scope.go:117] "RemoveContainer" containerID="b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127" Feb 14 19:29:18 crc kubenswrapper[4897]: E0214 19:29:18.987157 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127\": container with ID starting with b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127 not found: ID does not exist" containerID="b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.987180 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127"} err="failed to get container status \"b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127\": rpc error: code = NotFound desc = could not find container \"b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127\": container with ID starting with b65dffa088767561d574efb74171a6f41a91c75f304fc021123ebf95049ba127 not found: ID does not exist" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.987193 4897 scope.go:117] "RemoveContainer" containerID="0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06" Feb 14 19:29:18 crc 
kubenswrapper[4897]: E0214 19:29:18.987465 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06\": container with ID starting with 0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06 not found: ID does not exist" containerID="0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06" Feb 14 19:29:18 crc kubenswrapper[4897]: I0214 19:29:18.987490 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06"} err="failed to get container status \"0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06\": rpc error: code = NotFound desc = could not find container \"0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06\": container with ID starting with 0f6551606b1e84e7f4aa1f3e79704f05e7f1d9f9809078e24ede70f76f5c3b06 not found: ID does not exist" Feb 14 19:29:19 crc kubenswrapper[4897]: I0214 19:29:19.815795 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ce4164-7f76-4e44-b018-19f081557efd" path="/var/lib/kubelet/pods/88ce4164-7f76-4e44-b018-19f081557efd/volumes" Feb 14 19:29:31 crc kubenswrapper[4897]: I0214 19:29:31.725721 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:29:31 crc kubenswrapper[4897]: I0214 19:29:31.726281 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.048130 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9x6kz"] Feb 14 19:29:39 crc kubenswrapper[4897]: E0214 19:29:39.049266 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="extract-content" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.049280 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="extract-content" Feb 14 19:29:39 crc kubenswrapper[4897]: E0214 19:29:39.049300 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="extract-utilities" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.049308 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="extract-utilities" Feb 14 19:29:39 crc kubenswrapper[4897]: E0214 19:29:39.049321 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="registry-server" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.049329 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="registry-server" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.049592 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ce4164-7f76-4e44-b018-19f081557efd" containerName="registry-server" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.051331 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.058300 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9x6kz"] Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.084057 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-catalog-content\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.084342 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-utilities\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.084600 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74cc\" (UniqueName: \"kubernetes.io/projected/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-kube-api-access-m74cc\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.186662 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-catalog-content\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.186754 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-utilities\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.186972 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74cc\" (UniqueName: \"kubernetes.io/projected/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-kube-api-access-m74cc\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.187115 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-catalog-content\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.187683 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-utilities\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.213707 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74cc\" (UniqueName: \"kubernetes.io/projected/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-kube-api-access-m74cc\") pod \"certified-operators-9x6kz\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.384801 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:39 crc kubenswrapper[4897]: I0214 19:29:39.954015 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9x6kz"] Feb 14 19:29:40 crc kubenswrapper[4897]: I0214 19:29:40.138460 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerStarted","Data":"6785e599026172986f7ae34d389ba67525bd4cae1ba6ab628aa0065c8c904768"} Feb 14 19:29:41 crc kubenswrapper[4897]: I0214 19:29:41.150949 4897 generic.go:334] "Generic (PLEG): container finished" podID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerID="7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f" exitCode=0 Feb 14 19:29:41 crc kubenswrapper[4897]: I0214 19:29:41.151078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerDied","Data":"7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f"} Feb 14 19:29:43 crc kubenswrapper[4897]: I0214 19:29:43.179080 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerStarted","Data":"2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c"} Feb 14 19:29:44 crc kubenswrapper[4897]: I0214 19:29:44.193326 4897 generic.go:334] "Generic (PLEG): container finished" podID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerID="2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c" exitCode=0 Feb 14 19:29:44 crc kubenswrapper[4897]: I0214 19:29:44.193946 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" 
event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerDied","Data":"2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c"} Feb 14 19:29:45 crc kubenswrapper[4897]: I0214 19:29:45.208190 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerStarted","Data":"7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a"} Feb 14 19:29:45 crc kubenswrapper[4897]: I0214 19:29:45.240864 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9x6kz" podStartSLOduration=3.7899370919999997 podStartE2EDuration="7.240832966s" podCreationTimestamp="2026-02-14 19:29:38 +0000 UTC" firstStartedPulling="2026-02-14 19:29:41.15345168 +0000 UTC m=+2834.129860173" lastFinishedPulling="2026-02-14 19:29:44.604347524 +0000 UTC m=+2837.580756047" observedRunningTime="2026-02-14 19:29:45.226418639 +0000 UTC m=+2838.202827142" watchObservedRunningTime="2026-02-14 19:29:45.240832966 +0000 UTC m=+2838.217241489" Feb 14 19:29:49 crc kubenswrapper[4897]: I0214 19:29:49.385354 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:49 crc kubenswrapper[4897]: I0214 19:29:49.386009 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:49 crc kubenswrapper[4897]: I0214 19:29:49.448141 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:50 crc kubenswrapper[4897]: I0214 19:29:50.348009 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:50 crc kubenswrapper[4897]: I0214 19:29:50.414405 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9x6kz"] Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.290874 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9x6kz" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="registry-server" containerID="cri-o://7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a" gracePeriod=2 Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.879074 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.883128 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74cc\" (UniqueName: \"kubernetes.io/projected/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-kube-api-access-m74cc\") pod \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.883222 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-utilities\") pod \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.883422 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-catalog-content\") pod \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\" (UID: \"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5\") " Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.884377 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-utilities" (OuterVolumeSpecName: "utilities") pod "fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" (UID: 
"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.887902 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.893187 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-kube-api-access-m74cc" (OuterVolumeSpecName: "kube-api-access-m74cc") pod "fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" (UID: "fba6e75c-94be-4cf6-95b6-d8742e8e2ce5"). InnerVolumeSpecName "kube-api-access-m74cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.969687 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" (UID: "fba6e75c-94be-4cf6-95b6-d8742e8e2ce5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.989728 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74cc\" (UniqueName: \"kubernetes.io/projected/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-kube-api-access-m74cc\") on node \"crc\" DevicePath \"\"" Feb 14 19:29:52 crc kubenswrapper[4897]: I0214 19:29:52.989763 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.303201 4897 generic.go:334] "Generic (PLEG): container finished" podID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerID="7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a" exitCode=0 Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.303254 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9x6kz" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.303284 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerDied","Data":"7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a"} Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.303373 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9x6kz" event={"ID":"fba6e75c-94be-4cf6-95b6-d8742e8e2ce5","Type":"ContainerDied","Data":"6785e599026172986f7ae34d389ba67525bd4cae1ba6ab628aa0065c8c904768"} Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.303409 4897 scope.go:117] "RemoveContainer" containerID="7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.341406 4897 scope.go:117] "RemoveContainer" 
containerID="2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.362075 4897 scope.go:117] "RemoveContainer" containerID="7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.367011 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9x6kz"] Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.380337 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9x6kz"] Feb 14 19:29:53 crc kubenswrapper[4897]: E0214 19:29:53.420208 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfba6e75c_94be_4cf6_95b6_d8742e8e2ce5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfba6e75c_94be_4cf6_95b6_d8742e8e2ce5.slice/crio-6785e599026172986f7ae34d389ba67525bd4cae1ba6ab628aa0065c8c904768\": RecentStats: unable to find data in memory cache]" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.447428 4897 scope.go:117] "RemoveContainer" containerID="7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a" Feb 14 19:29:53 crc kubenswrapper[4897]: E0214 19:29:53.451608 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a\": container with ID starting with 7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a not found: ID does not exist" containerID="7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.451756 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a"} err="failed to get container status \"7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a\": rpc error: code = NotFound desc = could not find container \"7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a\": container with ID starting with 7ef4f57d749ad97b3e9983f6870afc99fc52b0fb473e770b8e4c97f40093205a not found: ID does not exist" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.451853 4897 scope.go:117] "RemoveContainer" containerID="2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c" Feb 14 19:29:53 crc kubenswrapper[4897]: E0214 19:29:53.452930 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c\": container with ID starting with 2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c not found: ID does not exist" containerID="2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.452991 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c"} err="failed to get container status \"2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c\": rpc error: code = NotFound desc = could not find container \"2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c\": container with ID starting with 2308523ccb429655aaaba6828ea9273ca2a6a86c59d5bbb06f6d5271e47a950c not found: ID does not exist" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.453042 4897 scope.go:117] "RemoveContainer" containerID="7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f" Feb 14 19:29:53 crc kubenswrapper[4897]: E0214 19:29:53.453522 4897 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f\": container with ID starting with 7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f not found: ID does not exist" containerID="7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.453549 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f"} err="failed to get container status \"7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f\": rpc error: code = NotFound desc = could not find container \"7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f\": container with ID starting with 7b3410f3a54d66b6d8712d751547e8b68fff42119a26b28dc2ad890600d4947f not found: ID does not exist" Feb 14 19:29:53 crc kubenswrapper[4897]: I0214 19:29:53.813221 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" path="/var/lib/kubelet/pods/fba6e75c-94be-4cf6-95b6-d8742e8e2ce5/volumes" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.177305 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr"] Feb 14 19:30:00 crc kubenswrapper[4897]: E0214 19:30:00.179172 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="registry-server" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.179211 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="registry-server" Feb 14 19:30:00 crc kubenswrapper[4897]: E0214 19:30:00.179293 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="extract-utilities" Feb 14 
19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.179311 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="extract-utilities" Feb 14 19:30:00 crc kubenswrapper[4897]: E0214 19:30:00.179357 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="extract-content" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.179377 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="extract-content" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.180111 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba6e75c-94be-4cf6-95b6-d8742e8e2ce5" containerName="registry-server" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.182095 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.184891 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.184896 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.205371 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr"] Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.283751 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e5a598f-2e95-4f4e-a9ba-993823b16b86-config-volume\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.284387 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pph4\" (UniqueName: \"kubernetes.io/projected/1e5a598f-2e95-4f4e-a9ba-993823b16b86-kube-api-access-4pph4\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.284522 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e5a598f-2e95-4f4e-a9ba-993823b16b86-secret-volume\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.386377 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pph4\" (UniqueName: \"kubernetes.io/projected/1e5a598f-2e95-4f4e-a9ba-993823b16b86-kube-api-access-4pph4\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.386421 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e5a598f-2e95-4f4e-a9ba-993823b16b86-secret-volume\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.386648 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1e5a598f-2e95-4f4e-a9ba-993823b16b86-config-volume\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.387617 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e5a598f-2e95-4f4e-a9ba-993823b16b86-config-volume\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.395366 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e5a598f-2e95-4f4e-a9ba-993823b16b86-secret-volume\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.402932 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pph4\" (UniqueName: \"kubernetes.io/projected/1e5a598f-2e95-4f4e-a9ba-993823b16b86-kube-api-access-4pph4\") pod \"collect-profiles-29518290-9j2mr\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:00 crc kubenswrapper[4897]: I0214 19:30:00.503983 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:01 crc kubenswrapper[4897]: I0214 19:30:01.002559 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr"] Feb 14 19:30:01 crc kubenswrapper[4897]: I0214 19:30:01.430549 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" event={"ID":"1e5a598f-2e95-4f4e-a9ba-993823b16b86","Type":"ContainerStarted","Data":"0ffa39deab5f1c975694c70ab24e37d401b0639833c29092b10f130760c2bcc9"} Feb 14 19:30:01 crc kubenswrapper[4897]: I0214 19:30:01.430866 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" event={"ID":"1e5a598f-2e95-4f4e-a9ba-993823b16b86","Type":"ContainerStarted","Data":"f53e3008a3f32639e58d19c5834315a39f4f3d13cad4c91274b350607eec9d27"} Feb 14 19:30:01 crc kubenswrapper[4897]: I0214 19:30:01.448104 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" podStartSLOduration=1.448089347 podStartE2EDuration="1.448089347s" podCreationTimestamp="2026-02-14 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:30:01.446397735 +0000 UTC m=+2854.422806228" watchObservedRunningTime="2026-02-14 19:30:01.448089347 +0000 UTC m=+2854.424497840" Feb 14 19:30:01 crc kubenswrapper[4897]: I0214 19:30:01.725830 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:30:01 crc kubenswrapper[4897]: I0214 
19:30:01.726527 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:30:02 crc kubenswrapper[4897]: I0214 19:30:02.449628 4897 generic.go:334] "Generic (PLEG): container finished" podID="1e5a598f-2e95-4f4e-a9ba-993823b16b86" containerID="0ffa39deab5f1c975694c70ab24e37d401b0639833c29092b10f130760c2bcc9" exitCode=0 Feb 14 19:30:02 crc kubenswrapper[4897]: I0214 19:30:02.449689 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" event={"ID":"1e5a598f-2e95-4f4e-a9ba-993823b16b86","Type":"ContainerDied","Data":"0ffa39deab5f1c975694c70ab24e37d401b0639833c29092b10f130760c2bcc9"} Feb 14 19:30:03 crc kubenswrapper[4897]: I0214 19:30:03.969314 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.082219 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pph4\" (UniqueName: \"kubernetes.io/projected/1e5a598f-2e95-4f4e-a9ba-993823b16b86-kube-api-access-4pph4\") pod \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.082452 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e5a598f-2e95-4f4e-a9ba-993823b16b86-config-volume\") pod \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.082521 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e5a598f-2e95-4f4e-a9ba-993823b16b86-secret-volume\") pod \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\" (UID: \"1e5a598f-2e95-4f4e-a9ba-993823b16b86\") " Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.083656 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e5a598f-2e95-4f4e-a9ba-993823b16b86-config-volume" (OuterVolumeSpecName: "config-volume") pod "1e5a598f-2e95-4f4e-a9ba-993823b16b86" (UID: "1e5a598f-2e95-4f4e-a9ba-993823b16b86"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.088074 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5a598f-2e95-4f4e-a9ba-993823b16b86-kube-api-access-4pph4" (OuterVolumeSpecName: "kube-api-access-4pph4") pod "1e5a598f-2e95-4f4e-a9ba-993823b16b86" (UID: "1e5a598f-2e95-4f4e-a9ba-993823b16b86"). 
InnerVolumeSpecName "kube-api-access-4pph4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.088132 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5a598f-2e95-4f4e-a9ba-993823b16b86-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1e5a598f-2e95-4f4e-a9ba-993823b16b86" (UID: "1e5a598f-2e95-4f4e-a9ba-993823b16b86"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.185693 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pph4\" (UniqueName: \"kubernetes.io/projected/1e5a598f-2e95-4f4e-a9ba-993823b16b86-kube-api-access-4pph4\") on node \"crc\" DevicePath \"\"" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.185729 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e5a598f-2e95-4f4e-a9ba-993823b16b86-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.185740 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e5a598f-2e95-4f4e-a9ba-993823b16b86-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.494899 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" event={"ID":"1e5a598f-2e95-4f4e-a9ba-993823b16b86","Type":"ContainerDied","Data":"f53e3008a3f32639e58d19c5834315a39f4f3d13cad4c91274b350607eec9d27"} Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.495240 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53e3008a3f32639e58d19c5834315a39f4f3d13cad4c91274b350607eec9d27" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.495254 4897 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr" Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.543406 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh"] Feb 14 19:30:04 crc kubenswrapper[4897]: I0214 19:30:04.555859 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518245-94xzh"] Feb 14 19:30:05 crc kubenswrapper[4897]: I0214 19:30:05.821561 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b06eb7-15a4-4237-9a72-9c3464f1cff1" path="/var/lib/kubelet/pods/66b06eb7-15a4-4237-9a72-9c3464f1cff1/volumes" Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.725565 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.726306 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.726371 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.727753 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8"} 
pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.727853 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8" gracePeriod=600 Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.939101 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8" exitCode=0 Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.939149 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8"} Feb 14 19:30:31 crc kubenswrapper[4897]: I0214 19:30:31.939180 4897 scope.go:117] "RemoveContainer" containerID="9695197876fefa6d21a161c8e8f588b60ef3eadbdb3a9ac58817b186cad5dae5" Feb 14 19:30:31 crc kubenswrapper[4897]: E0214 19:30:31.995190 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-conmon-c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:30:32 
crc kubenswrapper[4897]: I0214 19:30:32.955368 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431"} Feb 14 19:30:42 crc kubenswrapper[4897]: I0214 19:30:42.640495 4897 scope.go:117] "RemoveContainer" containerID="a7350ca45e490c895e24f3af30e20355949f58244b141c2f6ce196da928d8e82" Feb 14 19:31:17 crc kubenswrapper[4897]: I0214 19:31:17.546235 4897 generic.go:334] "Generic (PLEG): container finished" podID="2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" containerID="e6c19c1c83a7eb9d1c09e056bfc94f7b6490696a370dc179db240d263bec12be" exitCode=0 Feb 14 19:31:17 crc kubenswrapper[4897]: I0214 19:31:17.546321 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" event={"ID":"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00","Type":"ContainerDied","Data":"e6c19c1c83a7eb9d1c09e056bfc94f7b6490696a370dc179db240d263bec12be"} Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.122576 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.233857 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-2\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.233905 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-1\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.233965 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx9lz\" (UniqueName: \"kubernetes.io/projected/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-kube-api-access-jx9lz\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.234074 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-inventory\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.234124 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ssh-key-openstack-edpm-ipam\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 
19:31:19.234167 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-0\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.234218 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-telemetry-combined-ca-bundle\") pod \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\" (UID: \"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00\") " Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.240593 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.241192 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-kube-api-access-jx9lz" (OuterVolumeSpecName: "kube-api-access-jx9lz") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "kube-api-access-jx9lz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.273369 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.278189 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.285715 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.293159 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.303614 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-inventory" (OuterVolumeSpecName: "inventory") pod "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" (UID: "2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338091 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338121 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338130 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338139 4897 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338149 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338158 4897 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.338166 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx9lz\" (UniqueName: \"kubernetes.io/projected/2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00-kube-api-access-jx9lz\") on node \"crc\" DevicePath \"\"" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.577135 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" event={"ID":"2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00","Type":"ContainerDied","Data":"4c5fd59c2a076fa63c41005421ed74d8ae3976a25782c608b5500270a2bc7afa"} Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.577498 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5fd59c2a076fa63c41005421ed74d8ae3976a25782c608b5500270a2bc7afa" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.577194 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-76ncx" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.712647 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w"] Feb 14 19:31:19 crc kubenswrapper[4897]: E0214 19:31:19.713444 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.713599 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 19:31:19 crc kubenswrapper[4897]: E0214 19:31:19.713738 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5a598f-2e95-4f4e-a9ba-993823b16b86" containerName="collect-profiles" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.713841 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5a598f-2e95-4f4e-a9ba-993823b16b86" containerName="collect-profiles" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.714233 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5a598f-2e95-4f4e-a9ba-993823b16b86" containerName="collect-profiles" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.714776 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.715815 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.718785 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.721166 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.721368 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.721595 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.721721 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.740791 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w"] Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.850098 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.850474 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.850556 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.850720 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.850779 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.851129 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pq8d\" 
(UniqueName: \"kubernetes.io/projected/40ebae8a-773a-4b42-9385-81e545bff644-kube-api-access-2pq8d\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.851272 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.953560 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.953659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.953859 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.953925 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.954125 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.954165 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.954306 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pq8d\" (UniqueName: 
\"kubernetes.io/projected/40ebae8a-773a-4b42-9385-81e545bff644-kube-api-access-2pq8d\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.958792 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.959329 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.960012 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.960054 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.960838 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.960901 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:19 crc kubenswrapper[4897]: I0214 19:31:19.973475 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pq8d\" (UniqueName: \"kubernetes.io/projected/40ebae8a-773a-4b42-9385-81e545bff644-kube-api-access-2pq8d\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:20 crc kubenswrapper[4897]: I0214 19:31:20.045347 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:31:20 crc kubenswrapper[4897]: I0214 19:31:20.685975 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w"] Feb 14 19:31:20 crc kubenswrapper[4897]: W0214 19:31:20.687446 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40ebae8a_773a_4b42_9385_81e545bff644.slice/crio-15d3f3e4dac6c789e01e5483b9fccdc35ddbb2fb51cf4eabf9cb52cf97515f19 WatchSource:0}: Error finding container 15d3f3e4dac6c789e01e5483b9fccdc35ddbb2fb51cf4eabf9cb52cf97515f19: Status 404 returned error can't find the container with id 15d3f3e4dac6c789e01e5483b9fccdc35ddbb2fb51cf4eabf9cb52cf97515f19 Feb 14 19:31:21 crc kubenswrapper[4897]: I0214 19:31:21.601024 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" event={"ID":"40ebae8a-773a-4b42-9385-81e545bff644","Type":"ContainerStarted","Data":"84034863c5603741050e62d2c6a6313c0b566db053a535b66c883af208dc40c3"} Feb 14 19:31:21 crc kubenswrapper[4897]: I0214 19:31:21.601459 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" event={"ID":"40ebae8a-773a-4b42-9385-81e545bff644","Type":"ContainerStarted","Data":"15d3f3e4dac6c789e01e5483b9fccdc35ddbb2fb51cf4eabf9cb52cf97515f19"} Feb 14 19:31:21 crc kubenswrapper[4897]: I0214 19:31:21.629572 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" podStartSLOduration=2.157179655 podStartE2EDuration="2.62955091s" podCreationTimestamp="2026-02-14 19:31:19 +0000 UTC" firstStartedPulling="2026-02-14 19:31:20.690359512 +0000 UTC m=+2933.666768015" lastFinishedPulling="2026-02-14 
19:31:21.162730787 +0000 UTC m=+2934.139139270" observedRunningTime="2026-02-14 19:31:21.622930324 +0000 UTC m=+2934.599338817" watchObservedRunningTime="2026-02-14 19:31:21.62955091 +0000 UTC m=+2934.605959393" Feb 14 19:33:01 crc kubenswrapper[4897]: I0214 19:33:01.726231 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:33:01 crc kubenswrapper[4897]: I0214 19:33:01.726999 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:33:27 crc kubenswrapper[4897]: I0214 19:33:27.239678 4897 generic.go:334] "Generic (PLEG): container finished" podID="40ebae8a-773a-4b42-9385-81e545bff644" containerID="84034863c5603741050e62d2c6a6313c0b566db053a535b66c883af208dc40c3" exitCode=0 Feb 14 19:33:27 crc kubenswrapper[4897]: I0214 19:33:27.239770 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" event={"ID":"40ebae8a-773a-4b42-9385-81e545bff644","Type":"ContainerDied","Data":"84034863c5603741050e62d2c6a6313c0b566db053a535b66c883af208dc40c3"} Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.756725 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.955762 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pq8d\" (UniqueName: \"kubernetes.io/projected/40ebae8a-773a-4b42-9385-81e545bff644-kube-api-access-2pq8d\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.955828 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-telemetry-power-monitoring-combined-ca-bundle\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.955964 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-0\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.955991 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-inventory\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.956020 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ssh-key-openstack-edpm-ipam\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 
19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.956186 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-1\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.956275 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-2\") pod \"40ebae8a-773a-4b42-9385-81e545bff644\" (UID: \"40ebae8a-773a-4b42-9385-81e545bff644\") " Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.977756 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.977826 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ebae8a-773a-4b42-9385-81e545bff644-kube-api-access-2pq8d" (OuterVolumeSpecName: "kube-api-access-2pq8d") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "kube-api-access-2pq8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:33:28 crc kubenswrapper[4897]: I0214 19:33:28.998235 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.005162 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.008933 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.011913 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.017995 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-inventory" (OuterVolumeSpecName: "inventory") pod "40ebae8a-773a-4b42-9385-81e545bff644" (UID: "40ebae8a-773a-4b42-9385-81e545bff644"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.060399 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.060443 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.060458 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pq8d\" (UniqueName: \"kubernetes.io/projected/40ebae8a-773a-4b42-9385-81e545bff644-kube-api-access-2pq8d\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.060474 4897 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.060489 4897 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc 
kubenswrapper[4897]: I0214 19:33:29.060502 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.060513 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ebae8a-773a-4b42-9385-81e545bff644-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.265900 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" event={"ID":"40ebae8a-773a-4b42-9385-81e545bff644","Type":"ContainerDied","Data":"15d3f3e4dac6c789e01e5483b9fccdc35ddbb2fb51cf4eabf9cb52cf97515f19"} Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.265942 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d3f3e4dac6c789e01e5483b9fccdc35ddbb2fb51cf4eabf9cb52cf97515f19" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.266423 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.392335 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95"] Feb 14 19:33:29 crc kubenswrapper[4897]: E0214 19:33:29.393215 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ebae8a-773a-4b42-9385-81e545bff644" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.393258 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ebae8a-773a-4b42-9385-81e545bff644" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.393672 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ebae8a-773a-4b42-9385-81e545bff644" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.394969 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.397248 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.397574 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.397661 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.397898 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j869w" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.398473 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.406989 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95"] Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.574504 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.574794 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: 
\"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.575132 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.575752 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztv66\" (UniqueName: \"kubernetes.io/projected/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-kube-api-access-ztv66\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.575990 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.678212 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 
19:33:29.678288 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.678384 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztv66\" (UniqueName: \"kubernetes.io/projected/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-kube-api-access-ztv66\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.678440 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.678463 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.684597 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.684616 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.686400 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.690121 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.696718 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztv66\" (UniqueName: \"kubernetes.io/projected/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-kube-api-access-ztv66\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8wz95\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:29 crc kubenswrapper[4897]: I0214 19:33:29.724195 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:30 crc kubenswrapper[4897]: I0214 19:33:30.367877 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95"] Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.040582 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6hmmv"] Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.045531 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.062311 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hmmv"] Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.117168 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-catalog-content\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.117442 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsz4s\" (UniqueName: \"kubernetes.io/projected/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-kube-api-access-rsz4s\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.117503 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-utilities\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.219359 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-catalog-content\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.219969 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-catalog-content\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.219986 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsz4s\" (UniqueName: \"kubernetes.io/projected/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-kube-api-access-rsz4s\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.220163 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-utilities\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.220644 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-utilities\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.249900 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsz4s\" (UniqueName: \"kubernetes.io/projected/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-kube-api-access-rsz4s\") pod \"redhat-operators-6hmmv\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.287581 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" event={"ID":"0d55ecd3-26e4-46ef-9ab2-addd80af57d7","Type":"ContainerStarted","Data":"0e46dd2eac106228c4525d03cb50ec82811ad8925d2922f1f17394326d40996b"} Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.287643 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" event={"ID":"0d55ecd3-26e4-46ef-9ab2-addd80af57d7","Type":"ContainerStarted","Data":"e84d291bb9dbc832d053d2879918107490fec090975106cb310cfddd69968991"} Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.324923 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" podStartSLOduration=1.8204571 podStartE2EDuration="2.324905531s" podCreationTimestamp="2026-02-14 19:33:29 +0000 UTC" firstStartedPulling="2026-02-14 19:33:30.371846642 +0000 UTC m=+3063.348255135" lastFinishedPulling="2026-02-14 19:33:30.876295053 +0000 UTC m=+3063.852703566" observedRunningTime="2026-02-14 19:33:31.323513537 +0000 UTC m=+3064.299922030" watchObservedRunningTime="2026-02-14 19:33:31.324905531 +0000 UTC m=+3064.301314014" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.468643 4897 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.725667 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:33:31 crc kubenswrapper[4897]: I0214 19:33:31.726057 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:33:32 crc kubenswrapper[4897]: I0214 19:33:32.014615 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hmmv"] Feb 14 19:33:32 crc kubenswrapper[4897]: I0214 19:33:32.314172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerStarted","Data":"b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1"} Feb 14 19:33:32 crc kubenswrapper[4897]: I0214 19:33:32.314561 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerStarted","Data":"dca630850b3204f6e7e709daf4b7b6a4e5d64b158d100c6cc2bd1956aa0a99d5"} Feb 14 19:33:33 crc kubenswrapper[4897]: I0214 19:33:33.355984 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerID="b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1" exitCode=0 Feb 14 19:33:33 crc kubenswrapper[4897]: I0214 19:33:33.356288 4897 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerDied","Data":"b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1"} Feb 14 19:33:34 crc kubenswrapper[4897]: I0214 19:33:34.391343 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerStarted","Data":"493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2"} Feb 14 19:33:37 crc kubenswrapper[4897]: I0214 19:33:37.429356 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerID="493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2" exitCode=0 Feb 14 19:33:37 crc kubenswrapper[4897]: I0214 19:33:37.430154 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerDied","Data":"493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2"} Feb 14 19:33:38 crc kubenswrapper[4897]: I0214 19:33:38.446692 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerStarted","Data":"482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac"} Feb 14 19:33:38 crc kubenswrapper[4897]: I0214 19:33:38.485488 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6hmmv" podStartSLOduration=4.015880826 podStartE2EDuration="8.485460759s" podCreationTimestamp="2026-02-14 19:33:30 +0000 UTC" firstStartedPulling="2026-02-14 19:33:33.361037946 +0000 UTC m=+3066.337446429" lastFinishedPulling="2026-02-14 19:33:37.830617839 +0000 UTC m=+3070.807026362" observedRunningTime="2026-02-14 19:33:38.469523735 +0000 UTC 
m=+3071.445932228" watchObservedRunningTime="2026-02-14 19:33:38.485460759 +0000 UTC m=+3071.461869262" Feb 14 19:33:41 crc kubenswrapper[4897]: I0214 19:33:41.470627 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:41 crc kubenswrapper[4897]: I0214 19:33:41.471234 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:42 crc kubenswrapper[4897]: I0214 19:33:42.529948 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6hmmv" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="registry-server" probeResult="failure" output=< Feb 14 19:33:42 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:33:42 crc kubenswrapper[4897]: > Feb 14 19:33:46 crc kubenswrapper[4897]: I0214 19:33:46.553361 4897 generic.go:334] "Generic (PLEG): container finished" podID="0d55ecd3-26e4-46ef-9ab2-addd80af57d7" containerID="0e46dd2eac106228c4525d03cb50ec82811ad8925d2922f1f17394326d40996b" exitCode=0 Feb 14 19:33:46 crc kubenswrapper[4897]: I0214 19:33:46.553485 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" event={"ID":"0d55ecd3-26e4-46ef-9ab2-addd80af57d7","Type":"ContainerDied","Data":"0e46dd2eac106228c4525d03cb50ec82811ad8925d2922f1f17394326d40996b"} Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.021810 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.072872 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-0\") pod \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.073116 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-1\") pod \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.073164 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-inventory\") pod \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.073204 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-ssh-key-openstack-edpm-ipam\") pod \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.073294 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztv66\" (UniqueName: \"kubernetes.io/projected/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-kube-api-access-ztv66\") pod \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\" (UID: \"0d55ecd3-26e4-46ef-9ab2-addd80af57d7\") " Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 
19:33:48.081345 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-kube-api-access-ztv66" (OuterVolumeSpecName: "kube-api-access-ztv66") pod "0d55ecd3-26e4-46ef-9ab2-addd80af57d7" (UID: "0d55ecd3-26e4-46ef-9ab2-addd80af57d7"). InnerVolumeSpecName "kube-api-access-ztv66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.112709 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d55ecd3-26e4-46ef-9ab2-addd80af57d7" (UID: "0d55ecd3-26e4-46ef-9ab2-addd80af57d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.129892 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-inventory" (OuterVolumeSpecName: "inventory") pod "0d55ecd3-26e4-46ef-9ab2-addd80af57d7" (UID: "0d55ecd3-26e4-46ef-9ab2-addd80af57d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.135938 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "0d55ecd3-26e4-46ef-9ab2-addd80af57d7" (UID: "0d55ecd3-26e4-46ef-9ab2-addd80af57d7"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.142400 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "0d55ecd3-26e4-46ef-9ab2-addd80af57d7" (UID: "0d55ecd3-26e4-46ef-9ab2-addd80af57d7"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.176160 4897 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.176195 4897 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.176205 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.176217 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztv66\" (UniqueName: \"kubernetes.io/projected/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-kube-api-access-ztv66\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.176226 4897 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d55ecd3-26e4-46ef-9ab2-addd80af57d7-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 
19:33:48.578531 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" event={"ID":"0d55ecd3-26e4-46ef-9ab2-addd80af57d7","Type":"ContainerDied","Data":"e84d291bb9dbc832d053d2879918107490fec090975106cb310cfddd69968991"} Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.578574 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e84d291bb9dbc832d053d2879918107490fec090975106cb310cfddd69968991" Feb 14 19:33:48 crc kubenswrapper[4897]: I0214 19:33:48.578589 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8wz95" Feb 14 19:33:51 crc kubenswrapper[4897]: I0214 19:33:51.545293 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:51 crc kubenswrapper[4897]: I0214 19:33:51.601787 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:51 crc kubenswrapper[4897]: I0214 19:33:51.789579 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hmmv"] Feb 14 19:33:52 crc kubenswrapper[4897]: I0214 19:33:52.627595 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6hmmv" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="registry-server" containerID="cri-o://482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac" gracePeriod=2 Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.201536 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.312452 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-catalog-content\") pod \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.326218 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsz4s\" (UniqueName: \"kubernetes.io/projected/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-kube-api-access-rsz4s\") pod \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.326394 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-utilities\") pod \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\" (UID: \"a0cec4e4-b381-43bd-a9b3-848e8d673b9e\") " Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.327926 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-utilities" (OuterVolumeSpecName: "utilities") pod "a0cec4e4-b381-43bd-a9b3-848e8d673b9e" (UID: "a0cec4e4-b381-43bd-a9b3-848e8d673b9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.348303 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-kube-api-access-rsz4s" (OuterVolumeSpecName: "kube-api-access-rsz4s") pod "a0cec4e4-b381-43bd-a9b3-848e8d673b9e" (UID: "a0cec4e4-b381-43bd-a9b3-848e8d673b9e"). InnerVolumeSpecName "kube-api-access-rsz4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.430132 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsz4s\" (UniqueName: \"kubernetes.io/projected/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-kube-api-access-rsz4s\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.430176 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.463369 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0cec4e4-b381-43bd-a9b3-848e8d673b9e" (UID: "a0cec4e4-b381-43bd-a9b3-848e8d673b9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.532360 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cec4e4-b381-43bd-a9b3-848e8d673b9e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.642493 4897 generic.go:334] "Generic (PLEG): container finished" podID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerID="482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac" exitCode=0 Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.642547 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerDied","Data":"482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac"} Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.642594 4897 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hmmv" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.642613 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hmmv" event={"ID":"a0cec4e4-b381-43bd-a9b3-848e8d673b9e","Type":"ContainerDied","Data":"dca630850b3204f6e7e709daf4b7b6a4e5d64b158d100c6cc2bd1956aa0a99d5"} Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.642646 4897 scope.go:117] "RemoveContainer" containerID="482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.680834 4897 scope.go:117] "RemoveContainer" containerID="493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.696660 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hmmv"] Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.710790 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6hmmv"] Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.730517 4897 scope.go:117] "RemoveContainer" containerID="b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.783162 4897 scope.go:117] "RemoveContainer" containerID="482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac" Feb 14 19:33:53 crc kubenswrapper[4897]: E0214 19:33:53.783659 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac\": container with ID starting with 482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac not found: ID does not exist" containerID="482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.783704 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac"} err="failed to get container status \"482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac\": rpc error: code = NotFound desc = could not find container \"482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac\": container with ID starting with 482384cca8f24180265c15609c70d2c1740535bea6d5f8cb320783cee19f0dac not found: ID does not exist" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.783732 4897 scope.go:117] "RemoveContainer" containerID="493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2" Feb 14 19:33:53 crc kubenswrapper[4897]: E0214 19:33:53.784573 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2\": container with ID starting with 493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2 not found: ID does not exist" containerID="493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.784623 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2"} err="failed to get container status \"493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2\": rpc error: code = NotFound desc = could not find container \"493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2\": container with ID starting with 493dbb992f6f9f759b85c3366a58a6584d4fb03a38205f608086a103e521d7e2 not found: ID does not exist" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.784657 4897 scope.go:117] "RemoveContainer" containerID="b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1" Feb 14 19:33:53 crc kubenswrapper[4897]: E0214 
19:33:53.785204 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1\": container with ID starting with b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1 not found: ID does not exist" containerID="b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.785237 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1"} err="failed to get container status \"b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1\": rpc error: code = NotFound desc = could not find container \"b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1\": container with ID starting with b7e91149367241759b8cf0d3d6a93e3777bb474caf44923ab3e1f58b6c19ebc1 not found: ID does not exist" Feb 14 19:33:53 crc kubenswrapper[4897]: I0214 19:33:53.810189 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" path="/var/lib/kubelet/pods/a0cec4e4-b381-43bd-a9b3-848e8d673b9e/volumes" Feb 14 19:34:01 crc kubenswrapper[4897]: I0214 19:34:01.725797 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:34:01 crc kubenswrapper[4897]: I0214 19:34:01.726359 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 14 19:34:01 crc kubenswrapper[4897]: I0214 19:34:01.726429 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:34:01 crc kubenswrapper[4897]: I0214 19:34:01.727502 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:34:01 crc kubenswrapper[4897]: I0214 19:34:01.727575 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" gracePeriod=600 Feb 14 19:34:01 crc kubenswrapper[4897]: E0214 19:34:01.857755 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:34:02 crc kubenswrapper[4897]: I0214 19:34:02.755667 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" exitCode=0 Feb 14 19:34:02 crc kubenswrapper[4897]: I0214 19:34:02.755822 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" 
event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431"} Feb 14 19:34:02 crc kubenswrapper[4897]: I0214 19:34:02.755982 4897 scope.go:117] "RemoveContainer" containerID="c6626430ea6c421b822bccb52e029bf5b509b0b28bfac88869c30b1dbfcb44a8" Feb 14 19:34:02 crc kubenswrapper[4897]: I0214 19:34:02.756709 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:34:02 crc kubenswrapper[4897]: E0214 19:34:02.756970 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:34:14 crc kubenswrapper[4897]: I0214 19:34:14.794391 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:34:14 crc kubenswrapper[4897]: E0214 19:34:14.795455 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:34:26 crc kubenswrapper[4897]: I0214 19:34:26.795425 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:34:26 crc kubenswrapper[4897]: E0214 19:34:26.797236 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:34:40 crc kubenswrapper[4897]: I0214 19:34:40.796208 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:34:40 crc kubenswrapper[4897]: E0214 19:34:40.797859 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:34:51 crc kubenswrapper[4897]: I0214 19:34:51.795520 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:34:51 crc kubenswrapper[4897]: E0214 19:34:51.796321 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:35:06 crc kubenswrapper[4897]: I0214 19:35:06.794046 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:35:06 crc kubenswrapper[4897]: E0214 19:35:06.795720 4897 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:35:21 crc kubenswrapper[4897]: I0214 19:35:21.794562 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:35:21 crc kubenswrapper[4897]: E0214 19:35:21.795897 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:35:35 crc kubenswrapper[4897]: I0214 19:35:35.794866 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:35:35 crc kubenswrapper[4897]: E0214 19:35:35.795931 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:35:49 crc kubenswrapper[4897]: I0214 19:35:49.795282 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:35:49 crc kubenswrapper[4897]: E0214 19:35:49.796516 4897 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:36:00 crc kubenswrapper[4897]: I0214 19:36:00.794800 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:36:00 crc kubenswrapper[4897]: E0214 19:36:00.795881 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.419279 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rw2f7"] Feb 14 19:36:12 crc kubenswrapper[4897]: E0214 19:36:12.421320 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="extract-content" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.421399 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="extract-content" Feb 14 19:36:12 crc kubenswrapper[4897]: E0214 19:36:12.421462 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="extract-utilities" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.421524 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="extract-utilities" Feb 14 19:36:12 crc kubenswrapper[4897]: E0214 19:36:12.421599 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="registry-server" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.421653 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="registry-server" Feb 14 19:36:12 crc kubenswrapper[4897]: E0214 19:36:12.421915 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d55ecd3-26e4-46ef-9ab2-addd80af57d7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.421980 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d55ecd3-26e4-46ef-9ab2-addd80af57d7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.422249 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cec4e4-b381-43bd-a9b3-848e8d673b9e" containerName="registry-server" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.422327 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d55ecd3-26e4-46ef-9ab2-addd80af57d7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.423974 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.440226 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw2f7"] Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.482250 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-catalog-content\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.482339 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2sjp\" (UniqueName: \"kubernetes.io/projected/e33181b2-a46a-442c-8c53-97356d2461de-kube-api-access-r2sjp\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.482711 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-utilities\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.585573 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-utilities\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.585913 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-catalog-content\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.586080 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2sjp\" (UniqueName: \"kubernetes.io/projected/e33181b2-a46a-442c-8c53-97356d2461de-kube-api-access-r2sjp\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.586099 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-utilities\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.586427 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-catalog-content\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.609653 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2sjp\" (UniqueName: \"kubernetes.io/projected/e33181b2-a46a-442c-8c53-97356d2461de-kube-api-access-r2sjp\") pod \"redhat-marketplace-rw2f7\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:12 crc kubenswrapper[4897]: I0214 19:36:12.743227 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:13 crc kubenswrapper[4897]: I0214 19:36:13.234523 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw2f7"] Feb 14 19:36:13 crc kubenswrapper[4897]: I0214 19:36:13.622590 4897 generic.go:334] "Generic (PLEG): container finished" podID="e33181b2-a46a-442c-8c53-97356d2461de" containerID="a72a112121146b6cc182ed0f3dc905876c84747eecb238ca18a69f3ce2d62054" exitCode=0 Feb 14 19:36:13 crc kubenswrapper[4897]: I0214 19:36:13.622672 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerDied","Data":"a72a112121146b6cc182ed0f3dc905876c84747eecb238ca18a69f3ce2d62054"} Feb 14 19:36:13 crc kubenswrapper[4897]: I0214 19:36:13.622926 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerStarted","Data":"60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2"} Feb 14 19:36:13 crc kubenswrapper[4897]: I0214 19:36:13.626336 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:36:14 crc kubenswrapper[4897]: I0214 19:36:14.644176 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerStarted","Data":"c34049d8b202ba9b43f3ae8fe9f2825ad4405d393242775750b4683b9bad957e"} Feb 14 19:36:15 crc kubenswrapper[4897]: I0214 19:36:15.663808 4897 generic.go:334] "Generic (PLEG): container finished" podID="e33181b2-a46a-442c-8c53-97356d2461de" containerID="c34049d8b202ba9b43f3ae8fe9f2825ad4405d393242775750b4683b9bad957e" exitCode=0 Feb 14 19:36:15 crc kubenswrapper[4897]: I0214 19:36:15.663885 4897 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerDied","Data":"c34049d8b202ba9b43f3ae8fe9f2825ad4405d393242775750b4683b9bad957e"} Feb 14 19:36:15 crc kubenswrapper[4897]: I0214 19:36:15.794338 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:36:15 crc kubenswrapper[4897]: E0214 19:36:15.795415 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:36:16 crc kubenswrapper[4897]: I0214 19:36:16.679303 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerStarted","Data":"f55422fbf2820e3e1a4a3cee7d3e61af41b37b888fc85f129bd4894c4c17e8fb"} Feb 14 19:36:16 crc kubenswrapper[4897]: I0214 19:36:16.702477 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rw2f7" podStartSLOduration=2.245048844 podStartE2EDuration="4.702459817s" podCreationTimestamp="2026-02-14 19:36:12 +0000 UTC" firstStartedPulling="2026-02-14 19:36:13.625457844 +0000 UTC m=+3226.601866327" lastFinishedPulling="2026-02-14 19:36:16.082868797 +0000 UTC m=+3229.059277300" observedRunningTime="2026-02-14 19:36:16.695268594 +0000 UTC m=+3229.671677097" watchObservedRunningTime="2026-02-14 19:36:16.702459817 +0000 UTC m=+3229.678868290" Feb 14 19:36:22 crc kubenswrapper[4897]: I0214 19:36:22.743746 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:22 crc kubenswrapper[4897]: I0214 19:36:22.744353 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:22 crc kubenswrapper[4897]: I0214 19:36:22.805605 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:23 crc kubenswrapper[4897]: I0214 19:36:23.834845 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:26 crc kubenswrapper[4897]: I0214 19:36:26.414950 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw2f7"] Feb 14 19:36:26 crc kubenswrapper[4897]: I0214 19:36:26.416427 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rw2f7" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="registry-server" containerID="cri-o://f55422fbf2820e3e1a4a3cee7d3e61af41b37b888fc85f129bd4894c4c17e8fb" gracePeriod=2 Feb 14 19:36:26 crc kubenswrapper[4897]: I0214 19:36:26.802843 4897 generic.go:334] "Generic (PLEG): container finished" podID="e33181b2-a46a-442c-8c53-97356d2461de" containerID="f55422fbf2820e3e1a4a3cee7d3e61af41b37b888fc85f129bd4894c4c17e8fb" exitCode=0 Feb 14 19:36:26 crc kubenswrapper[4897]: I0214 19:36:26.803117 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerDied","Data":"f55422fbf2820e3e1a4a3cee7d3e61af41b37b888fc85f129bd4894c4c17e8fb"} Feb 14 19:36:26 crc kubenswrapper[4897]: I0214 19:36:26.969889 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.091962 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-catalog-content\") pod \"e33181b2-a46a-442c-8c53-97356d2461de\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.092411 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-utilities\") pod \"e33181b2-a46a-442c-8c53-97356d2461de\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.092661 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2sjp\" (UniqueName: \"kubernetes.io/projected/e33181b2-a46a-442c-8c53-97356d2461de-kube-api-access-r2sjp\") pod \"e33181b2-a46a-442c-8c53-97356d2461de\" (UID: \"e33181b2-a46a-442c-8c53-97356d2461de\") " Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.095311 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-utilities" (OuterVolumeSpecName: "utilities") pod "e33181b2-a46a-442c-8c53-97356d2461de" (UID: "e33181b2-a46a-442c-8c53-97356d2461de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.101798 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e33181b2-a46a-442c-8c53-97356d2461de-kube-api-access-r2sjp" (OuterVolumeSpecName: "kube-api-access-r2sjp") pod "e33181b2-a46a-442c-8c53-97356d2461de" (UID: "e33181b2-a46a-442c-8c53-97356d2461de"). InnerVolumeSpecName "kube-api-access-r2sjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.115867 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e33181b2-a46a-442c-8c53-97356d2461de" (UID: "e33181b2-a46a-442c-8c53-97356d2461de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.197508 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.197537 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33181b2-a46a-442c-8c53-97356d2461de-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.197546 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2sjp\" (UniqueName: \"kubernetes.io/projected/e33181b2-a46a-442c-8c53-97356d2461de-kube-api-access-r2sjp\") on node \"crc\" DevicePath \"\"" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.802003 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:36:27 crc kubenswrapper[4897]: E0214 19:36:27.802372 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:36:27 
crc kubenswrapper[4897]: I0214 19:36:27.818199 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw2f7" event={"ID":"e33181b2-a46a-442c-8c53-97356d2461de","Type":"ContainerDied","Data":"60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2"} Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.818278 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw2f7" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.818576 4897 scope.go:117] "RemoveContainer" containerID="f55422fbf2820e3e1a4a3cee7d3e61af41b37b888fc85f129bd4894c4c17e8fb" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.859525 4897 scope.go:117] "RemoveContainer" containerID="c34049d8b202ba9b43f3ae8fe9f2825ad4405d393242775750b4683b9bad957e" Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.864830 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw2f7"] Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.902620 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw2f7"] Feb 14 19:36:27 crc kubenswrapper[4897]: I0214 19:36:27.911839 4897 scope.go:117] "RemoveContainer" containerID="a72a112121146b6cc182ed0f3dc905876c84747eecb238ca18a69f3ce2d62054" Feb 14 19:36:29 crc kubenswrapper[4897]: I0214 19:36:29.820438 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e33181b2-a46a-442c-8c53-97356d2461de" path="/var/lib/kubelet/pods/e33181b2-a46a-442c-8c53-97356d2461de/volumes" Feb 14 19:36:31 crc kubenswrapper[4897]: E0214 19:36:31.878714 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache]" Feb 14 19:36:32 crc kubenswrapper[4897]: E0214 19:36:32.447408 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache]" Feb 14 19:36:40 crc kubenswrapper[4897]: I0214 19:36:40.794420 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:36:40 crc kubenswrapper[4897]: E0214 19:36:40.795591 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:36:42 crc kubenswrapper[4897]: E0214 19:36:42.803300 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": 
RecentStats: unable to find data in memory cache]" Feb 14 19:36:46 crc kubenswrapper[4897]: E0214 19:36:46.649041 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache]" Feb 14 19:36:48 crc kubenswrapper[4897]: E0214 19:36:48.249429 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache]" Feb 14 19:36:48 crc kubenswrapper[4897]: E0214 19:36:48.249618 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache]" Feb 14 19:36:52 crc kubenswrapper[4897]: I0214 19:36:52.798301 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:36:52 crc kubenswrapper[4897]: E0214 
19:36:52.799731 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:36:52 crc kubenswrapper[4897]: E0214 19:36:52.866789 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache]" Feb 14 19:37:01 crc kubenswrapper[4897]: E0214 19:37:01.949488 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache]" Feb 14 19:37:02 crc kubenswrapper[4897]: E0214 19:37:02.922062 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache]" Feb 14 19:37:05 crc kubenswrapper[4897]: I0214 19:37:05.794209 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:37:05 crc kubenswrapper[4897]: E0214 19:37:05.795063 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:37:13 crc kubenswrapper[4897]: E0214 19:37:13.263605 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache]" Feb 14 19:37:16 crc kubenswrapper[4897]: E0214 19:37:16.649245 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": 
RecentStats: unable to find data in memory cache]" Feb 14 19:37:18 crc kubenswrapper[4897]: I0214 19:37:18.794470 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:37:18 crc kubenswrapper[4897]: E0214 19:37:18.795161 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:37:23 crc kubenswrapper[4897]: E0214 19:37:23.548054 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode33181b2_a46a_442c_8c53_97356d2461de.slice/crio-60c75dd16dd8625281e8adf315d3c56f3b5f9d3b672041d974536da516b588b2\": RecentStats: unable to find data in memory cache]" Feb 14 19:37:32 crc kubenswrapper[4897]: I0214 19:37:32.797125 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:37:32 crc kubenswrapper[4897]: E0214 19:37:32.798301 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:37:46 crc 
kubenswrapper[4897]: I0214 19:37:46.794657 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:37:46 crc kubenswrapper[4897]: E0214 19:37:46.795498 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:38:00 crc kubenswrapper[4897]: I0214 19:38:00.796749 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:38:00 crc kubenswrapper[4897]: E0214 19:38:00.797876 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:38:15 crc kubenswrapper[4897]: I0214 19:38:15.799492 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:38:15 crc kubenswrapper[4897]: E0214 19:38:15.800225 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 
14 19:38:26 crc kubenswrapper[4897]: I0214 19:38:26.794645 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:38:26 crc kubenswrapper[4897]: E0214 19:38:26.795389 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:38:40 crc kubenswrapper[4897]: I0214 19:38:40.794722 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:38:40 crc kubenswrapper[4897]: E0214 19:38:40.795678 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:38:54 crc kubenswrapper[4897]: I0214 19:38:54.795525 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:38:54 crc kubenswrapper[4897]: E0214 19:38:54.796813 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" 
podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.460124 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7sdf"] Feb 14 19:39:06 crc kubenswrapper[4897]: E0214 19:39:06.461272 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="extract-content" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.461288 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="extract-content" Feb 14 19:39:06 crc kubenswrapper[4897]: E0214 19:39:06.461341 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="extract-utilities" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.461350 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="extract-utilities" Feb 14 19:39:06 crc kubenswrapper[4897]: E0214 19:39:06.461372 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="registry-server" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.461380 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="registry-server" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.461629 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="e33181b2-a46a-442c-8c53-97356d2461de" containerName="registry-server" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.463842 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.482285 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7sdf"] Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.572362 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-utilities\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.572434 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-catalog-content\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.572548 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpvs\" (UniqueName: \"kubernetes.io/projected/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-kube-api-access-5xpvs\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.674671 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpvs\" (UniqueName: \"kubernetes.io/projected/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-kube-api-access-5xpvs\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.674826 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-utilities\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.674900 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-catalog-content\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.675457 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-utilities\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.680613 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-catalog-content\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.704102 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpvs\" (UniqueName: \"kubernetes.io/projected/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-kube-api-access-5xpvs\") pod \"community-operators-m7sdf\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:06 crc kubenswrapper[4897]: I0214 19:39:06.792242 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:07 crc kubenswrapper[4897]: I0214 19:39:07.325086 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7sdf"] Feb 14 19:39:07 crc kubenswrapper[4897]: I0214 19:39:07.813351 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:39:07 crc kubenswrapper[4897]: I0214 19:39:07.878783 4897 generic.go:334] "Generic (PLEG): container finished" podID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerID="79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c" exitCode=0 Feb 14 19:39:07 crc kubenswrapper[4897]: I0214 19:39:07.879244 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerDied","Data":"79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c"} Feb 14 19:39:07 crc kubenswrapper[4897]: I0214 19:39:07.879294 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerStarted","Data":"dfafa04b7184a53defd33a315ba877ea574d5041d0b7b9d6f165a5ef28b70bf8"} Feb 14 19:39:08 crc kubenswrapper[4897]: I0214 19:39:08.899878 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerStarted","Data":"037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763"} Feb 14 19:39:08 crc kubenswrapper[4897]: I0214 19:39:08.907320 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"0975d26308356b3c92cb19f91a95d0679d36ff9bac5e59fbcaf7cc24d4b0a2d7"} Feb 
14 19:39:10 crc kubenswrapper[4897]: I0214 19:39:10.927846 4897 generic.go:334] "Generic (PLEG): container finished" podID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerID="037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763" exitCode=0 Feb 14 19:39:10 crc kubenswrapper[4897]: I0214 19:39:10.927922 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerDied","Data":"037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763"} Feb 14 19:39:11 crc kubenswrapper[4897]: I0214 19:39:11.943337 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerStarted","Data":"31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b"} Feb 14 19:39:11 crc kubenswrapper[4897]: I0214 19:39:11.979386 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7sdf" podStartSLOduration=2.555495596 podStartE2EDuration="5.979363787s" podCreationTimestamp="2026-02-14 19:39:06 +0000 UTC" firstStartedPulling="2026-02-14 19:39:07.884717438 +0000 UTC m=+3400.861125951" lastFinishedPulling="2026-02-14 19:39:11.308585649 +0000 UTC m=+3404.284994142" observedRunningTime="2026-02-14 19:39:11.963957368 +0000 UTC m=+3404.940365871" watchObservedRunningTime="2026-02-14 19:39:11.979363787 +0000 UTC m=+3404.955772270" Feb 14 19:39:16 crc kubenswrapper[4897]: I0214 19:39:16.792464 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:16 crc kubenswrapper[4897]: I0214 19:39:16.792933 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:16 crc kubenswrapper[4897]: I0214 19:39:16.848463 4897 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:17 crc kubenswrapper[4897]: I0214 19:39:17.071386 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:17 crc kubenswrapper[4897]: I0214 19:39:17.129226 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7sdf"] Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.027545 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7sdf" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="registry-server" containerID="cri-o://31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b" gracePeriod=2 Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.607417 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.712924 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpvs\" (UniqueName: \"kubernetes.io/projected/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-kube-api-access-5xpvs\") pod \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.713218 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-utilities\") pod \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.713288 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-catalog-content\") pod \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\" (UID: \"c980b259-4f5c-4dd2-83fc-9956bdda2dc9\") " Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.713884 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-utilities" (OuterVolumeSpecName: "utilities") pod "c980b259-4f5c-4dd2-83fc-9956bdda2dc9" (UID: "c980b259-4f5c-4dd2-83fc-9956bdda2dc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.714291 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.720639 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-kube-api-access-5xpvs" (OuterVolumeSpecName: "kube-api-access-5xpvs") pod "c980b259-4f5c-4dd2-83fc-9956bdda2dc9" (UID: "c980b259-4f5c-4dd2-83fc-9956bdda2dc9"). InnerVolumeSpecName "kube-api-access-5xpvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.773362 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c980b259-4f5c-4dd2-83fc-9956bdda2dc9" (UID: "c980b259-4f5c-4dd2-83fc-9956bdda2dc9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.816732 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpvs\" (UniqueName: \"kubernetes.io/projected/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-kube-api-access-5xpvs\") on node \"crc\" DevicePath \"\"" Feb 14 19:39:19 crc kubenswrapper[4897]: I0214 19:39:19.816772 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c980b259-4f5c-4dd2-83fc-9956bdda2dc9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.040774 4897 generic.go:334] "Generic (PLEG): container finished" podID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerID="31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b" exitCode=0 Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.040847 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7sdf" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.040841 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerDied","Data":"31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b"} Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.041109 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7sdf" event={"ID":"c980b259-4f5c-4dd2-83fc-9956bdda2dc9","Type":"ContainerDied","Data":"dfafa04b7184a53defd33a315ba877ea574d5041d0b7b9d6f165a5ef28b70bf8"} Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.041146 4897 scope.go:117] "RemoveContainer" containerID="31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.078887 4897 scope.go:117] "RemoveContainer" 
containerID="037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.089773 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7sdf"] Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.110405 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7sdf"] Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.114805 4897 scope.go:117] "RemoveContainer" containerID="79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.179779 4897 scope.go:117] "RemoveContainer" containerID="31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b" Feb 14 19:39:20 crc kubenswrapper[4897]: E0214 19:39:20.180304 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b\": container with ID starting with 31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b not found: ID does not exist" containerID="31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.180366 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b"} err="failed to get container status \"31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b\": rpc error: code = NotFound desc = could not find container \"31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b\": container with ID starting with 31181d25db41808a9c1a8bc1fe4335d637a7f757518257c8fb6c90ffd3d3b73b not found: ID does not exist" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.180392 4897 scope.go:117] "RemoveContainer" 
containerID="037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763" Feb 14 19:39:20 crc kubenswrapper[4897]: E0214 19:39:20.180826 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763\": container with ID starting with 037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763 not found: ID does not exist" containerID="037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.180869 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763"} err="failed to get container status \"037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763\": rpc error: code = NotFound desc = could not find container \"037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763\": container with ID starting with 037d5bef20912ee13dc91bbe1164b88fc9a7f2ce3f2c7bc5e2f8591140b04763 not found: ID does not exist" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.180896 4897 scope.go:117] "RemoveContainer" containerID="79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c" Feb 14 19:39:20 crc kubenswrapper[4897]: E0214 19:39:20.181335 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c\": container with ID starting with 79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c not found: ID does not exist" containerID="79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c" Feb 14 19:39:20 crc kubenswrapper[4897]: I0214 19:39:20.181386 4897 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c"} err="failed to get container status \"79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c\": rpc error: code = NotFound desc = could not find container \"79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c\": container with ID starting with 79eae5fd8b4cb4e359523e6e4d80ab814725225ee94559c3190447e8d8bfbf1c not found: ID does not exist" Feb 14 19:39:21 crc kubenswrapper[4897]: I0214 19:39:21.823244 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" path="/var/lib/kubelet/pods/c980b259-4f5c-4dd2-83fc-9956bdda2dc9/volumes" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.242351 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvbxv"] Feb 14 19:41:00 crc kubenswrapper[4897]: E0214 19:41:00.243711 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="extract-utilities" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.243731 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="extract-utilities" Feb 14 19:41:00 crc kubenswrapper[4897]: E0214 19:41:00.243743 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="extract-content" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.243752 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="extract-content" Feb 14 19:41:00 crc kubenswrapper[4897]: E0214 19:41:00.243769 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="registry-server" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.243777 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="registry-server" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.244104 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="c980b259-4f5c-4dd2-83fc-9956bdda2dc9" containerName="registry-server" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.246219 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.268901 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvbxv"] Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.306762 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-catalog-content\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.306848 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-utilities\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.306963 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5f4f\" (UniqueName: \"kubernetes.io/projected/51e70669-6c3b-4a16-aa25-f4707f87ef77-kube-api-access-z5f4f\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.408588 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-catalog-content\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.408930 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-utilities\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.409046 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5f4f\" (UniqueName: \"kubernetes.io/projected/51e70669-6c3b-4a16-aa25-f4707f87ef77-kube-api-access-z5f4f\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.409358 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-catalog-content\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.409733 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-utilities\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.438019 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z5f4f\" (UniqueName: \"kubernetes.io/projected/51e70669-6c3b-4a16-aa25-f4707f87ef77-kube-api-access-z5f4f\") pod \"certified-operators-xvbxv\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:00 crc kubenswrapper[4897]: I0214 19:41:00.594550 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:01 crc kubenswrapper[4897]: I0214 19:41:01.191216 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvbxv"] Feb 14 19:41:01 crc kubenswrapper[4897]: I0214 19:41:01.392271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerStarted","Data":"2e78bb4a575e525dbde002391836eb812532ac08d31e4321978477c3d4518236"} Feb 14 19:41:02 crc kubenswrapper[4897]: I0214 19:41:02.405753 4897 generic.go:334] "Generic (PLEG): container finished" podID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerID="125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3" exitCode=0 Feb 14 19:41:02 crc kubenswrapper[4897]: I0214 19:41:02.405828 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerDied","Data":"125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3"} Feb 14 19:41:03 crc kubenswrapper[4897]: I0214 19:41:03.418313 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerStarted","Data":"54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca"} Feb 14 19:41:05 crc kubenswrapper[4897]: I0214 19:41:05.444830 4897 generic.go:334] "Generic (PLEG): container finished" 
podID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerID="54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca" exitCode=0 Feb 14 19:41:05 crc kubenswrapper[4897]: I0214 19:41:05.444913 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerDied","Data":"54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca"} Feb 14 19:41:06 crc kubenswrapper[4897]: I0214 19:41:06.460540 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerStarted","Data":"2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665"} Feb 14 19:41:06 crc kubenswrapper[4897]: I0214 19:41:06.503949 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvbxv" podStartSLOduration=3.037996416 podStartE2EDuration="6.50392838s" podCreationTimestamp="2026-02-14 19:41:00 +0000 UTC" firstStartedPulling="2026-02-14 19:41:02.408690034 +0000 UTC m=+3515.385098527" lastFinishedPulling="2026-02-14 19:41:05.874622008 +0000 UTC m=+3518.851030491" observedRunningTime="2026-02-14 19:41:06.490113602 +0000 UTC m=+3519.466522115" watchObservedRunningTime="2026-02-14 19:41:06.50392838 +0000 UTC m=+3519.480336863" Feb 14 19:41:10 crc kubenswrapper[4897]: I0214 19:41:10.595346 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:10 crc kubenswrapper[4897]: I0214 19:41:10.596114 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:10 crc kubenswrapper[4897]: I0214 19:41:10.653921 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:11 
crc kubenswrapper[4897]: I0214 19:41:11.591369 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:13 crc kubenswrapper[4897]: I0214 19:41:13.294175 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvbxv"] Feb 14 19:41:14 crc kubenswrapper[4897]: I0214 19:41:14.548291 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvbxv" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="registry-server" containerID="cri-o://2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665" gracePeriod=2 Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.133962 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.205496 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5f4f\" (UniqueName: \"kubernetes.io/projected/51e70669-6c3b-4a16-aa25-f4707f87ef77-kube-api-access-z5f4f\") pod \"51e70669-6c3b-4a16-aa25-f4707f87ef77\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.205647 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-utilities\") pod \"51e70669-6c3b-4a16-aa25-f4707f87ef77\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.205732 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-catalog-content\") pod \"51e70669-6c3b-4a16-aa25-f4707f87ef77\" (UID: \"51e70669-6c3b-4a16-aa25-f4707f87ef77\") " Feb 14 
19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.206836 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-utilities" (OuterVolumeSpecName: "utilities") pod "51e70669-6c3b-4a16-aa25-f4707f87ef77" (UID: "51e70669-6c3b-4a16-aa25-f4707f87ef77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.211502 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e70669-6c3b-4a16-aa25-f4707f87ef77-kube-api-access-z5f4f" (OuterVolumeSpecName: "kube-api-access-z5f4f") pod "51e70669-6c3b-4a16-aa25-f4707f87ef77" (UID: "51e70669-6c3b-4a16-aa25-f4707f87ef77"). InnerVolumeSpecName "kube-api-access-z5f4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.275715 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51e70669-6c3b-4a16-aa25-f4707f87ef77" (UID: "51e70669-6c3b-4a16-aa25-f4707f87ef77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.308563 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5f4f\" (UniqueName: \"kubernetes.io/projected/51e70669-6c3b-4a16-aa25-f4707f87ef77-kube-api-access-z5f4f\") on node \"crc\" DevicePath \"\"" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.308593 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.308603 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e70669-6c3b-4a16-aa25-f4707f87ef77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.564592 4897 generic.go:334] "Generic (PLEG): container finished" podID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerID="2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665" exitCode=0 Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.564645 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerDied","Data":"2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665"} Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.564728 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvbxv" event={"ID":"51e70669-6c3b-4a16-aa25-f4707f87ef77","Type":"ContainerDied","Data":"2e78bb4a575e525dbde002391836eb812532ac08d31e4321978477c3d4518236"} Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.564758 4897 scope.go:117] "RemoveContainer" containerID="2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 
19:41:15.564663 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvbxv" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.599280 4897 scope.go:117] "RemoveContainer" containerID="54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.616380 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvbxv"] Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.630760 4897 scope.go:117] "RemoveContainer" containerID="125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.639652 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvbxv"] Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.681722 4897 scope.go:117] "RemoveContainer" containerID="2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665" Feb 14 19:41:15 crc kubenswrapper[4897]: E0214 19:41:15.682272 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665\": container with ID starting with 2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665 not found: ID does not exist" containerID="2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.682348 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665"} err="failed to get container status \"2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665\": rpc error: code = NotFound desc = could not find container \"2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665\": container with ID starting with 
2da03b7afdd572ff72dcc0af2020d8e0899e44db852bfae41a69eca7b6229665 not found: ID does not exist" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.682402 4897 scope.go:117] "RemoveContainer" containerID="54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca" Feb 14 19:41:15 crc kubenswrapper[4897]: E0214 19:41:15.683278 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca\": container with ID starting with 54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca not found: ID does not exist" containerID="54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.683326 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca"} err="failed to get container status \"54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca\": rpc error: code = NotFound desc = could not find container \"54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca\": container with ID starting with 54730de0bbe06b44f1ae8f86f67e6b47212005e959bcfe343a20f853ac2b48ca not found: ID does not exist" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.683361 4897 scope.go:117] "RemoveContainer" containerID="125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3" Feb 14 19:41:15 crc kubenswrapper[4897]: E0214 19:41:15.683855 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3\": container with ID starting with 125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3 not found: ID does not exist" containerID="125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3" Feb 14 19:41:15 crc 
kubenswrapper[4897]: I0214 19:41:15.683891 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3"} err="failed to get container status \"125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3\": rpc error: code = NotFound desc = could not find container \"125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3\": container with ID starting with 125c0c8fce349b5a13da482ba386548d674e60117fb2b5310cb9065dd7a72ba3 not found: ID does not exist" Feb 14 19:41:15 crc kubenswrapper[4897]: I0214 19:41:15.811834 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" path="/var/lib/kubelet/pods/51e70669-6c3b-4a16-aa25-f4707f87ef77/volumes" Feb 14 19:41:31 crc kubenswrapper[4897]: I0214 19:41:31.725893 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:41:31 crc kubenswrapper[4897]: I0214 19:41:31.726816 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:42:01 crc kubenswrapper[4897]: I0214 19:42:01.726497 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:42:01 crc kubenswrapper[4897]: I0214 19:42:01.727171 4897 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:42:31 crc kubenswrapper[4897]: I0214 19:42:31.725815 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:42:31 crc kubenswrapper[4897]: I0214 19:42:31.726464 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:42:31 crc kubenswrapper[4897]: I0214 19:42:31.726515 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:42:31 crc kubenswrapper[4897]: I0214 19:42:31.727530 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0975d26308356b3c92cb19f91a95d0679d36ff9bac5e59fbcaf7cc24d4b0a2d7"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:42:31 crc kubenswrapper[4897]: I0214 19:42:31.727608 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" 
containerName="machine-config-daemon" containerID="cri-o://0975d26308356b3c92cb19f91a95d0679d36ff9bac5e59fbcaf7cc24d4b0a2d7" gracePeriod=600 Feb 14 19:42:32 crc kubenswrapper[4897]: I0214 19:42:32.622072 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"0975d26308356b3c92cb19f91a95d0679d36ff9bac5e59fbcaf7cc24d4b0a2d7"} Feb 14 19:42:32 crc kubenswrapper[4897]: I0214 19:42:32.622084 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="0975d26308356b3c92cb19f91a95d0679d36ff9bac5e59fbcaf7cc24d4b0a2d7" exitCode=0 Feb 14 19:42:32 crc kubenswrapper[4897]: I0214 19:42:32.622693 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"} Feb 14 19:42:32 crc kubenswrapper[4897]: I0214 19:42:32.622660 4897 scope.go:117] "RemoveContainer" containerID="ff33b65871b05e747cf54687f94620d0465d630cb0148fb954ee31a637969431" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.575857 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zl7mq"] Feb 14 19:43:49 crc kubenswrapper[4897]: E0214 19:43:49.576717 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="extract-utilities" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.576729 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="extract-utilities" Feb 14 19:43:49 crc kubenswrapper[4897]: E0214 19:43:49.576739 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" 
containerName="extract-content" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.576746 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="extract-content" Feb 14 19:43:49 crc kubenswrapper[4897]: E0214 19:43:49.576755 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="registry-server" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.576762 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="registry-server" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.576996 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e70669-6c3b-4a16-aa25-f4707f87ef77" containerName="registry-server" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.578674 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.610210 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zl7mq"] Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.632352 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-catalog-content\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.632740 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktsn\" (UniqueName: \"kubernetes.io/projected/662c1e8b-24e1-4a90-b48e-674f21a33bd7-kube-api-access-xktsn\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " 
pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.633197 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-utilities\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.738659 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktsn\" (UniqueName: \"kubernetes.io/projected/662c1e8b-24e1-4a90-b48e-674f21a33bd7-kube-api-access-xktsn\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.738909 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-utilities\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.738958 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-catalog-content\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.739711 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-utilities\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 
19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.752178 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-catalog-content\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.767086 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktsn\" (UniqueName: \"kubernetes.io/projected/662c1e8b-24e1-4a90-b48e-674f21a33bd7-kube-api-access-xktsn\") pod \"redhat-operators-zl7mq\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:49 crc kubenswrapper[4897]: I0214 19:43:49.902472 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:50 crc kubenswrapper[4897]: I0214 19:43:50.404739 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zl7mq"] Feb 14 19:43:50 crc kubenswrapper[4897]: I0214 19:43:50.639220 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerStarted","Data":"94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab"} Feb 14 19:43:50 crc kubenswrapper[4897]: I0214 19:43:50.639272 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerStarted","Data":"bf191e3fa19b89f020d093087b9976c1428972cede48e77c8dd882f0a02f2ad8"} Feb 14 19:43:50 crc kubenswrapper[4897]: I0214 19:43:50.641044 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:43:51 crc kubenswrapper[4897]: I0214 19:43:51.655625 
4897 generic.go:334] "Generic (PLEG): container finished" podID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerID="94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab" exitCode=0 Feb 14 19:43:51 crc kubenswrapper[4897]: I0214 19:43:51.655785 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerDied","Data":"94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab"} Feb 14 19:43:51 crc kubenswrapper[4897]: I0214 19:43:51.656268 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerStarted","Data":"d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3"} Feb 14 19:43:56 crc kubenswrapper[4897]: I0214 19:43:56.713849 4897 generic.go:334] "Generic (PLEG): container finished" podID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerID="d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3" exitCode=0 Feb 14 19:43:56 crc kubenswrapper[4897]: I0214 19:43:56.713998 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerDied","Data":"d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3"} Feb 14 19:43:57 crc kubenswrapper[4897]: I0214 19:43:57.730354 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerStarted","Data":"3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4"} Feb 14 19:43:57 crc kubenswrapper[4897]: I0214 19:43:57.751116 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zl7mq" podStartSLOduration=2.25891208 podStartE2EDuration="8.751090153s" 
podCreationTimestamp="2026-02-14 19:43:49 +0000 UTC" firstStartedPulling="2026-02-14 19:43:50.640807766 +0000 UTC m=+3683.617216249" lastFinishedPulling="2026-02-14 19:43:57.132985809 +0000 UTC m=+3690.109394322" observedRunningTime="2026-02-14 19:43:57.750597777 +0000 UTC m=+3690.727006290" watchObservedRunningTime="2026-02-14 19:43:57.751090153 +0000 UTC m=+3690.727498656" Feb 14 19:43:59 crc kubenswrapper[4897]: I0214 19:43:59.903143 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:43:59 crc kubenswrapper[4897]: I0214 19:43:59.903743 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:44:00 crc kubenswrapper[4897]: I0214 19:44:00.970886 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zl7mq" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="registry-server" probeResult="failure" output=< Feb 14 19:44:00 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:44:00 crc kubenswrapper[4897]: > Feb 14 19:44:09 crc kubenswrapper[4897]: I0214 19:44:09.963781 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:44:10 crc kubenswrapper[4897]: I0214 19:44:10.044012 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:44:10 crc kubenswrapper[4897]: I0214 19:44:10.211731 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zl7mq"] Feb 14 19:44:11 crc kubenswrapper[4897]: I0214 19:44:11.881979 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zl7mq" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="registry-server" 
containerID="cri-o://3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4" gracePeriod=2 Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.430277 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.530315 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-catalog-content\") pod \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.530522 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xktsn\" (UniqueName: \"kubernetes.io/projected/662c1e8b-24e1-4a90-b48e-674f21a33bd7-kube-api-access-xktsn\") pod \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.530726 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-utilities\") pod \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\" (UID: \"662c1e8b-24e1-4a90-b48e-674f21a33bd7\") " Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.531772 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-utilities" (OuterVolumeSpecName: "utilities") pod "662c1e8b-24e1-4a90-b48e-674f21a33bd7" (UID: "662c1e8b-24e1-4a90-b48e-674f21a33bd7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.538238 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662c1e8b-24e1-4a90-b48e-674f21a33bd7-kube-api-access-xktsn" (OuterVolumeSpecName: "kube-api-access-xktsn") pod "662c1e8b-24e1-4a90-b48e-674f21a33bd7" (UID: "662c1e8b-24e1-4a90-b48e-674f21a33bd7"). InnerVolumeSpecName "kube-api-access-xktsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.633524 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xktsn\" (UniqueName: \"kubernetes.io/projected/662c1e8b-24e1-4a90-b48e-674f21a33bd7-kube-api-access-xktsn\") on node \"crc\" DevicePath \"\"" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.633564 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.688182 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "662c1e8b-24e1-4a90-b48e-674f21a33bd7" (UID: "662c1e8b-24e1-4a90-b48e-674f21a33bd7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.735880 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662c1e8b-24e1-4a90-b48e-674f21a33bd7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.892466 4897 generic.go:334] "Generic (PLEG): container finished" podID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerID="3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4" exitCode=0 Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.892517 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerDied","Data":"3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4"} Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.892744 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl7mq" event={"ID":"662c1e8b-24e1-4a90-b48e-674f21a33bd7","Type":"ContainerDied","Data":"bf191e3fa19b89f020d093087b9976c1428972cede48e77c8dd882f0a02f2ad8"} Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.892772 4897 scope.go:117] "RemoveContainer" containerID="3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.892533 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zl7mq" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.920260 4897 scope.go:117] "RemoveContainer" containerID="d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3" Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.924447 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zl7mq"] Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.934208 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zl7mq"] Feb 14 19:44:12 crc kubenswrapper[4897]: I0214 19:44:12.955377 4897 scope.go:117] "RemoveContainer" containerID="94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.003230 4897 scope.go:117] "RemoveContainer" containerID="3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4" Feb 14 19:44:13 crc kubenswrapper[4897]: E0214 19:44:13.003857 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4\": container with ID starting with 3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4 not found: ID does not exist" containerID="3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.003930 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4"} err="failed to get container status \"3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4\": rpc error: code = NotFound desc = could not find container \"3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4\": container with ID starting with 3260b6600eb75b9d1743c6acfae1d2a4a3582bd63316863734d20ae37c275fc4 not found: ID does 
not exist" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.003977 4897 scope.go:117] "RemoveContainer" containerID="d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3" Feb 14 19:44:13 crc kubenswrapper[4897]: E0214 19:44:13.004426 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3\": container with ID starting with d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3 not found: ID does not exist" containerID="d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.004466 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3"} err="failed to get container status \"d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3\": rpc error: code = NotFound desc = could not find container \"d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3\": container with ID starting with d9bf6479867b148183c150a3fe3e52779dbaf604c64e820ebcda4290ef70ebe3 not found: ID does not exist" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.004493 4897 scope.go:117] "RemoveContainer" containerID="94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab" Feb 14 19:44:13 crc kubenswrapper[4897]: E0214 19:44:13.004957 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab\": container with ID starting with 94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab not found: ID does not exist" containerID="94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.005008 4897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab"} err="failed to get container status \"94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab\": rpc error: code = NotFound desc = could not find container \"94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab\": container with ID starting with 94ae94c8713738cbeeea27590af280581415e658b2f623d0b373c9883934cfab not found: ID does not exist" Feb 14 19:44:13 crc kubenswrapper[4897]: I0214 19:44:13.815610 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" path="/var/lib/kubelet/pods/662c1e8b-24e1-4a90-b48e-674f21a33bd7/volumes" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.188728 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt"] Feb 14 19:45:00 crc kubenswrapper[4897]: E0214 19:45:00.190256 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="registry-server" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.190282 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="registry-server" Feb 14 19:45:00 crc kubenswrapper[4897]: E0214 19:45:00.190329 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="extract-content" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.190343 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="extract-content" Feb 14 19:45:00 crc kubenswrapper[4897]: E0214 19:45:00.190364 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="extract-utilities" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 
19:45:00.190375 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="extract-utilities" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.190753 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="662c1e8b-24e1-4a90-b48e-674f21a33bd7" containerName="registry-server" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.192183 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.194570 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.194764 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.208703 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt"] Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.365552 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7xz\" (UniqueName: \"kubernetes.io/projected/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-kube-api-access-nl7xz\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.365607 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-secret-volume\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.365845 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-config-volume\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.467894 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-config-volume\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.469091 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-config-volume\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.469306 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7xz\" (UniqueName: \"kubernetes.io/projected/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-kube-api-access-nl7xz\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.469334 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-secret-volume\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.476825 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-secret-volume\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.485499 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7xz\" (UniqueName: \"kubernetes.io/projected/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-kube-api-access-nl7xz\") pod \"collect-profiles-29518305-m59xt\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.521190 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt"
Feb 14 19:45:00 crc kubenswrapper[4897]: I0214 19:45:00.989360 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt"]
Feb 14 19:45:01 crc kubenswrapper[4897]: I0214 19:45:01.491521 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" event={"ID":"d6d4a5a8-24e3-47a4-a469-af6d71e977c2","Type":"ContainerStarted","Data":"ef82e0517fc2681276ba36eaa8177a3a2f6cdeae979510c85cee8434231c5c52"}
Feb 14 19:45:01 crc kubenswrapper[4897]: I0214 19:45:01.491904 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" event={"ID":"d6d4a5a8-24e3-47a4-a469-af6d71e977c2","Type":"ContainerStarted","Data":"c1a993d86c89a4624b437f37e03d3ddbf7cc16221c1ce3f991ef0054fbd4c3a5"}
Feb 14 19:45:01 crc kubenswrapper[4897]: I0214 19:45:01.510314 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" podStartSLOduration=1.510294675 podStartE2EDuration="1.510294675s" podCreationTimestamp="2026-02-14 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 19:45:01.506100185 +0000 UTC m=+3754.482508678" watchObservedRunningTime="2026-02-14 19:45:01.510294675 +0000 UTC m=+3754.486703158"
Feb 14 19:45:01 crc kubenswrapper[4897]: I0214 19:45:01.725910 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:45:01 crc kubenswrapper[4897]: I0214 19:45:01.725967 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:45:01 crc kubenswrapper[4897]: E0214 19:45:01.928470 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6d4a5a8_24e3_47a4_a469_af6d71e977c2.slice/crio-ef82e0517fc2681276ba36eaa8177a3a2f6cdeae979510c85cee8434231c5c52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6d4a5a8_24e3_47a4_a469_af6d71e977c2.slice/crio-conmon-ef82e0517fc2681276ba36eaa8177a3a2f6cdeae979510c85cee8434231c5c52.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 19:45:02 crc kubenswrapper[4897]: I0214 19:45:02.505550 4897 generic.go:334] "Generic (PLEG): container finished" podID="d6d4a5a8-24e3-47a4-a469-af6d71e977c2" containerID="ef82e0517fc2681276ba36eaa8177a3a2f6cdeae979510c85cee8434231c5c52" exitCode=0
Feb 14 19:45:02 crc kubenswrapper[4897]: I0214 19:45:02.506179 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" event={"ID":"d6d4a5a8-24e3-47a4-a469-af6d71e977c2","Type":"ContainerDied","Data":"ef82e0517fc2681276ba36eaa8177a3a2f6cdeae979510c85cee8434231c5c52"}
Feb 14 19:45:03 crc kubenswrapper[4897]: I0214 19:45:03.988531 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt"
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.077096 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-config-volume\") pod \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") "
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.077302 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-secret-volume\") pod \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") "
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.077382 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl7xz\" (UniqueName: \"kubernetes.io/projected/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-kube-api-access-nl7xz\") pod \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\" (UID: \"d6d4a5a8-24e3-47a4-a469-af6d71e977c2\") "
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.078303 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-config-volume" (OuterVolumeSpecName: "config-volume") pod "d6d4a5a8-24e3-47a4-a469-af6d71e977c2" (UID: "d6d4a5a8-24e3-47a4-a469-af6d71e977c2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.078521 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.088972 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-kube-api-access-nl7xz" (OuterVolumeSpecName: "kube-api-access-nl7xz") pod "d6d4a5a8-24e3-47a4-a469-af6d71e977c2" (UID: "d6d4a5a8-24e3-47a4-a469-af6d71e977c2"). InnerVolumeSpecName "kube-api-access-nl7xz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.106012 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d6d4a5a8-24e3-47a4-a469-af6d71e977c2" (UID: "d6d4a5a8-24e3-47a4-a469-af6d71e977c2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.181381 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl7xz\" (UniqueName: \"kubernetes.io/projected/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-kube-api-access-nl7xz\") on node \"crc\" DevicePath \"\""
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.181425 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6d4a5a8-24e3-47a4-a469-af6d71e977c2-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.527268 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt" event={"ID":"d6d4a5a8-24e3-47a4-a469-af6d71e977c2","Type":"ContainerDied","Data":"c1a993d86c89a4624b437f37e03d3ddbf7cc16221c1ce3f991ef0054fbd4c3a5"}
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.527749 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1a993d86c89a4624b437f37e03d3ddbf7cc16221c1ce3f991ef0054fbd4c3a5"
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.527309 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518305-m59xt"
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.594754 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg"]
Feb 14 19:45:04 crc kubenswrapper[4897]: I0214 19:45:04.607250 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518260-wrkrg"]
Feb 14 19:45:05 crc kubenswrapper[4897]: I0214 19:45:05.808694 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b297e5-cb98-47d7-96bf-9a680217ecfe" path="/var/lib/kubelet/pods/91b297e5-cb98-47d7-96bf-9a680217ecfe/volumes"
Feb 14 19:45:31 crc kubenswrapper[4897]: I0214 19:45:31.725621 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:45:31 crc kubenswrapper[4897]: I0214 19:45:31.726343 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:45:32 crc kubenswrapper[4897]: E0214 19:45:32.363331 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:60282->38.102.83.41:34573: write tcp 38.102.83.41:60282->38.102.83.41:34573: write: broken pipe
Feb 14 19:45:32 crc kubenswrapper[4897]: E0214 19:45:32.363341 4897 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.41:60282->38.102.83.41:34573: read tcp 38.102.83.41:60282->38.102.83.41:34573: read: connection reset by peer
Feb 14 19:45:43 crc kubenswrapper[4897]: I0214 19:45:43.265966 4897 scope.go:117] "RemoveContainer" containerID="0ab894dc8e727385b753abb8e4553a750d12f7d61a2d4e60d807fff50993237c"
Feb 14 19:46:01 crc kubenswrapper[4897]: I0214 19:46:01.725685 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 19:46:01 crc kubenswrapper[4897]: I0214 19:46:01.726365 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 19:46:01 crc kubenswrapper[4897]: I0214 19:46:01.726424 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 19:46:01 crc kubenswrapper[4897]: I0214 19:46:01.727563 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 19:46:01 crc kubenswrapper[4897]: I0214 19:46:01.727652 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" gracePeriod=600
Feb 14 19:46:01 crc kubenswrapper[4897]: E0214 19:46:01.857943 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:46:01 crc kubenswrapper[4897]: E0214 19:46:01.957177 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-conmon-1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 19:46:02 crc kubenswrapper[4897]: I0214 19:46:02.274295 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" exitCode=0
Feb 14 19:46:02 crc kubenswrapper[4897]: I0214 19:46:02.274344 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"}
Feb 14 19:46:02 crc kubenswrapper[4897]: I0214 19:46:02.274383 4897 scope.go:117] "RemoveContainer" containerID="0975d26308356b3c92cb19f91a95d0679d36ff9bac5e59fbcaf7cc24d4b0a2d7"
Feb 14 19:46:02 crc kubenswrapper[4897]: I0214 19:46:02.275224 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:46:02 crc kubenswrapper[4897]: E0214 19:46:02.275591 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:46:13 crc kubenswrapper[4897]: I0214 19:46:13.794455 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:46:13 crc kubenswrapper[4897]: E0214 19:46:13.795718 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:46:25 crc kubenswrapper[4897]: I0214 19:46:25.794839 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:46:25 crc kubenswrapper[4897]: E0214 19:46:25.796008 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.596616 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k4f7h"]
Feb 14 19:46:29 crc kubenswrapper[4897]: E0214 19:46:29.597530 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d4a5a8-24e3-47a4-a469-af6d71e977c2" containerName="collect-profiles"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.597542 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d4a5a8-24e3-47a4-a469-af6d71e977c2" containerName="collect-profiles"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.597758 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d4a5a8-24e3-47a4-a469-af6d71e977c2" containerName="collect-profiles"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.599354 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.629898 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4f7h"]
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.707121 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879f9\" (UniqueName: \"kubernetes.io/projected/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-kube-api-access-879f9\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.707178 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-utilities\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.707886 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-catalog-content\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.810779 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-catalog-content\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.811183 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-879f9\" (UniqueName: \"kubernetes.io/projected/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-kube-api-access-879f9\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.811213 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-utilities\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.811789 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-utilities\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.812086 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-catalog-content\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.845338 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-879f9\" (UniqueName: \"kubernetes.io/projected/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-kube-api-access-879f9\") pod \"redhat-marketplace-k4f7h\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") " pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:29 crc kubenswrapper[4897]: I0214 19:46:29.934386 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:30 crc kubenswrapper[4897]: I0214 19:46:30.468158 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4f7h"]
Feb 14 19:46:30 crc kubenswrapper[4897]: I0214 19:46:30.643945 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerStarted","Data":"2959701171cc89f7db5e6f2f2abaa7cfd9649782f82a743396ae675e2e334379"}
Feb 14 19:46:31 crc kubenswrapper[4897]: I0214 19:46:31.662811 4897 generic.go:334] "Generic (PLEG): container finished" podID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerID="c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014" exitCode=0
Feb 14 19:46:31 crc kubenswrapper[4897]: I0214 19:46:31.662878 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerDied","Data":"c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014"}
Feb 14 19:46:32 crc kubenswrapper[4897]: I0214 19:46:32.677910 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerStarted","Data":"07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23"}
Feb 14 19:46:33 crc kubenswrapper[4897]: I0214 19:46:33.690917 4897 generic.go:334] "Generic (PLEG): container finished" podID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerID="07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23" exitCode=0
Feb 14 19:46:33 crc kubenswrapper[4897]: I0214 19:46:33.690977 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerDied","Data":"07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23"}
Feb 14 19:46:34 crc kubenswrapper[4897]: I0214 19:46:34.704567 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerStarted","Data":"9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57"}
Feb 14 19:46:34 crc kubenswrapper[4897]: I0214 19:46:34.729554 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k4f7h" podStartSLOduration=3.23712356 podStartE2EDuration="5.729532901s" podCreationTimestamp="2026-02-14 19:46:29 +0000 UTC" firstStartedPulling="2026-02-14 19:46:31.667099465 +0000 UTC m=+3844.643507978" lastFinishedPulling="2026-02-14 19:46:34.159508826 +0000 UTC m=+3847.135917319" observedRunningTime="2026-02-14 19:46:34.727836577 +0000 UTC m=+3847.704245110" watchObservedRunningTime="2026-02-14 19:46:34.729532901 +0000 UTC m=+3847.705941384"
Feb 14 19:46:38 crc kubenswrapper[4897]: I0214 19:46:38.795560 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:46:38 crc kubenswrapper[4897]: E0214 19:46:38.796929 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:46:39 crc kubenswrapper[4897]: I0214 19:46:39.934673 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:39 crc kubenswrapper[4897]: I0214 19:46:39.935070 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:40 crc kubenswrapper[4897]: I0214 19:46:40.005736 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:40 crc kubenswrapper[4897]: I0214 19:46:40.842306 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:40 crc kubenswrapper[4897]: I0214 19:46:40.896160 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4f7h"]
Feb 14 19:46:42 crc kubenswrapper[4897]: I0214 19:46:42.812678 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k4f7h" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="registry-server" containerID="cri-o://9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57" gracePeriod=2
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.483252 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.592843 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-catalog-content\") pod \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") "
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.592987 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879f9\" (UniqueName: \"kubernetes.io/projected/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-kube-api-access-879f9\") pod \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") "
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.593155 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-utilities\") pod \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\" (UID: \"5b8c65f4-f3fe-43cd-8372-fffed345c8c4\") "
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.594649 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-utilities" (OuterVolumeSpecName: "utilities") pod "5b8c65f4-f3fe-43cd-8372-fffed345c8c4" (UID: "5b8c65f4-f3fe-43cd-8372-fffed345c8c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.600168 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-kube-api-access-879f9" (OuterVolumeSpecName: "kube-api-access-879f9") pod "5b8c65f4-f3fe-43cd-8372-fffed345c8c4" (UID: "5b8c65f4-f3fe-43cd-8372-fffed345c8c4"). InnerVolumeSpecName "kube-api-access-879f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.636649 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b8c65f4-f3fe-43cd-8372-fffed345c8c4" (UID: "5b8c65f4-f3fe-43cd-8372-fffed345c8c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.696173 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.696237 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-879f9\" (UniqueName: \"kubernetes.io/projected/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-kube-api-access-879f9\") on node \"crc\" DevicePath \"\""
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.696252 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8c65f4-f3fe-43cd-8372-fffed345c8c4-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.830721 4897 generic.go:334] "Generic (PLEG): container finished" podID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerID="9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57" exitCode=0
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.830791 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerDied","Data":"9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57"}
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.830915 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4f7h"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.831146 4897 scope.go:117] "RemoveContainer" containerID="9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.831115 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4f7h" event={"ID":"5b8c65f4-f3fe-43cd-8372-fffed345c8c4","Type":"ContainerDied","Data":"2959701171cc89f7db5e6f2f2abaa7cfd9649782f82a743396ae675e2e334379"}
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.873214 4897 scope.go:117] "RemoveContainer" containerID="07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.877655 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4f7h"]
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.893075 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4f7h"]
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.910439 4897 scope.go:117] "RemoveContainer" containerID="c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.969981 4897 scope.go:117] "RemoveContainer" containerID="9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57"
Feb 14 19:46:43 crc kubenswrapper[4897]: E0214 19:46:43.970555 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57\": container with ID starting with 9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57 not found: ID does not exist" containerID="9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.970611 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57"} err="failed to get container status \"9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57\": rpc error: code = NotFound desc = could not find container \"9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57\": container with ID starting with 9242174db041b2afe51635e10e71a63d647a72353861347f5e6aa2551e45cb57 not found: ID does not exist"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.970646 4897 scope.go:117] "RemoveContainer" containerID="07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23"
Feb 14 19:46:43 crc kubenswrapper[4897]: E0214 19:46:43.971260 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23\": container with ID starting with 07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23 not found: ID does not exist" containerID="07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.971298 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23"} err="failed to get container status \"07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23\": rpc error: code = NotFound desc = could not find container \"07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23\": container with ID starting with 07c6ba669db34edf6c234a02aae00ec0d13083bca0c7d47c4bd186d634eb0c23 not found: ID does not exist"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.971325 4897 scope.go:117] "RemoveContainer" containerID="c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014"
Feb 14 19:46:43 crc kubenswrapper[4897]: E0214 19:46:43.971721 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014\": container with ID starting with c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014 not found: ID does not exist" containerID="c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014"
Feb 14 19:46:43 crc kubenswrapper[4897]: I0214 19:46:43.971770 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014"} err="failed to get container status \"c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014\": rpc error: code = NotFound desc = could not find container \"c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014\": container with ID starting with c771f29443d55e1817db667a2c9bca9109a31094f533eac799328b773a8de014 not found: ID does not exist"
Feb 14 19:46:45 crc kubenswrapper[4897]: I0214 19:46:45.805677 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" path="/var/lib/kubelet/pods/5b8c65f4-f3fe-43cd-8372-fffed345c8c4/volumes"
Feb 14 19:46:51 crc kubenswrapper[4897]: I0214 19:46:51.797531 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:46:51 crc kubenswrapper[4897]: E0214 19:46:51.800835 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:47:04 crc kubenswrapper[4897]: I0214 19:47:04.795302 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:47:04 crc kubenswrapper[4897]: E0214 19:47:04.796375 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:47:19 crc kubenswrapper[4897]: I0214 19:47:19.794559 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:47:19 crc kubenswrapper[4897]: E0214 19:47:19.795546 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:47:32 crc kubenswrapper[4897]: I0214 19:47:32.794924 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:47:32 crc kubenswrapper[4897]: E0214 19:47:32.795599 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:47:46 crc kubenswrapper[4897]: I0214 19:47:46.794386 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:47:46 crc kubenswrapper[4897]: E0214 19:47:46.795348 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:48:00 crc kubenswrapper[4897]: I0214 19:48:00.793380 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:48:00 crc kubenswrapper[4897]: E0214 19:48:00.794188 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:48:12 crc kubenswrapper[4897]: I0214 19:48:12.794856 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:48:12 crc kubenswrapper[4897]: E0214 19:48:12.796308 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:48:27 crc kubenswrapper[4897]: I0214 19:48:27.802529 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:48:27 crc kubenswrapper[4897]: E0214 19:48:27.803310 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:48:28 crc kubenswrapper[4897]: E0214 19:48:28.196933 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:53518->38.102.83.41:34573: write tcp 38.102.83.41:53518->38.102.83.41:34573: write: broken pipe
Feb 14 19:48:42 crc kubenswrapper[4897]: I0214 19:48:42.796245 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:48:42 crc kubenswrapper[4897]: E0214 19:48:42.797492 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 19:48:57 crc kubenswrapper[4897]: I0214 19:48:57.804668 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c"
Feb 14 19:48:57 crc kubenswrapper[4897]: E0214 19:48:57.805588 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.302209 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-85fqk"] Feb 14 19:49:09 crc kubenswrapper[4897]: E0214 19:49:09.303276 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="extract-utilities" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.303288 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="extract-utilities" Feb 14 19:49:09 crc kubenswrapper[4897]: E0214 19:49:09.303323 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="registry-server" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.303329 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="registry-server" Feb 14 19:49:09 crc kubenswrapper[4897]: E0214 19:49:09.303347 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="extract-content" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.303352 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="extract-content" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.303549 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8c65f4-f3fe-43cd-8372-fffed345c8c4" containerName="registry-server" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.305257 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.332420 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85fqk"] Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.410314 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-catalog-content\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.410406 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhwf\" (UniqueName: \"kubernetes.io/projected/74fd145d-c90c-48fc-b306-b89bb9a2edcf-kube-api-access-6mhwf\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.410449 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-utilities\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.512012 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-catalog-content\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.512123 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6mhwf\" (UniqueName: \"kubernetes.io/projected/74fd145d-c90c-48fc-b306-b89bb9a2edcf-kube-api-access-6mhwf\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.512173 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-utilities\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.512751 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-utilities\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.512755 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-catalog-content\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.540740 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mhwf\" (UniqueName: \"kubernetes.io/projected/74fd145d-c90c-48fc-b306-b89bb9a2edcf-kube-api-access-6mhwf\") pod \"community-operators-85fqk\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:09 crc kubenswrapper[4897]: I0214 19:49:09.631179 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:10 crc kubenswrapper[4897]: I0214 19:49:10.140337 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85fqk"] Feb 14 19:49:10 crc kubenswrapper[4897]: I0214 19:49:10.591855 4897 generic.go:334] "Generic (PLEG): container finished" podID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerID="e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9" exitCode=0 Feb 14 19:49:10 crc kubenswrapper[4897]: I0214 19:49:10.592086 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerDied","Data":"e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9"} Feb 14 19:49:10 crc kubenswrapper[4897]: I0214 19:49:10.592163 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerStarted","Data":"b2ccca8de3b341e1397bdb9f4dc34aeffb28992eb1a68a0c87a640f1d5847638"} Feb 14 19:49:10 crc kubenswrapper[4897]: I0214 19:49:10.593904 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:49:10 crc kubenswrapper[4897]: I0214 19:49:10.793860 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:49:10 crc kubenswrapper[4897]: E0214 19:49:10.794214 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 
19:49:12 crc kubenswrapper[4897]: I0214 19:49:12.646443 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerStarted","Data":"ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e"} Feb 14 19:49:13 crc kubenswrapper[4897]: I0214 19:49:13.659700 4897 generic.go:334] "Generic (PLEG): container finished" podID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerID="ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e" exitCode=0 Feb 14 19:49:13 crc kubenswrapper[4897]: I0214 19:49:13.659801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerDied","Data":"ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e"} Feb 14 19:49:14 crc kubenswrapper[4897]: I0214 19:49:14.682454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerStarted","Data":"0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0"} Feb 14 19:49:14 crc kubenswrapper[4897]: I0214 19:49:14.715826 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-85fqk" podStartSLOduration=2.122322612 podStartE2EDuration="5.715772609s" podCreationTimestamp="2026-02-14 19:49:09 +0000 UTC" firstStartedPulling="2026-02-14 19:49:10.593667593 +0000 UTC m=+4003.570076066" lastFinishedPulling="2026-02-14 19:49:14.18711755 +0000 UTC m=+4007.163526063" observedRunningTime="2026-02-14 19:49:14.709460603 +0000 UTC m=+4007.685869186" watchObservedRunningTime="2026-02-14 19:49:14.715772609 +0000 UTC m=+4007.692181122" Feb 14 19:49:19 crc kubenswrapper[4897]: I0214 19:49:19.631578 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:19 crc kubenswrapper[4897]: I0214 19:49:19.632456 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:20 crc kubenswrapper[4897]: I0214 19:49:20.692619 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-85fqk" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="registry-server" probeResult="failure" output=< Feb 14 19:49:20 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:49:20 crc kubenswrapper[4897]: > Feb 14 19:49:24 crc kubenswrapper[4897]: I0214 19:49:24.795062 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:49:24 crc kubenswrapper[4897]: E0214 19:49:24.796403 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:49:29 crc kubenswrapper[4897]: I0214 19:49:29.698466 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:29 crc kubenswrapper[4897]: I0214 19:49:29.774960 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:29 crc kubenswrapper[4897]: I0214 19:49:29.945527 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85fqk"] Feb 14 19:49:30 crc kubenswrapper[4897]: I0214 19:49:30.890922 4897 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-85fqk" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="registry-server" containerID="cri-o://0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0" gracePeriod=2 Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.476312 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.588290 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mhwf\" (UniqueName: \"kubernetes.io/projected/74fd145d-c90c-48fc-b306-b89bb9a2edcf-kube-api-access-6mhwf\") pod \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.588558 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-catalog-content\") pod \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.588940 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-utilities\") pod \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\" (UID: \"74fd145d-c90c-48fc-b306-b89bb9a2edcf\") " Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.590219 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-utilities" (OuterVolumeSpecName: "utilities") pod "74fd145d-c90c-48fc-b306-b89bb9a2edcf" (UID: "74fd145d-c90c-48fc-b306-b89bb9a2edcf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.595984 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fd145d-c90c-48fc-b306-b89bb9a2edcf-kube-api-access-6mhwf" (OuterVolumeSpecName: "kube-api-access-6mhwf") pod "74fd145d-c90c-48fc-b306-b89bb9a2edcf" (UID: "74fd145d-c90c-48fc-b306-b89bb9a2edcf"). InnerVolumeSpecName "kube-api-access-6mhwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.650791 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74fd145d-c90c-48fc-b306-b89bb9a2edcf" (UID: "74fd145d-c90c-48fc-b306-b89bb9a2edcf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.693170 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.693211 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mhwf\" (UniqueName: \"kubernetes.io/projected/74fd145d-c90c-48fc-b306-b89bb9a2edcf-kube-api-access-6mhwf\") on node \"crc\" DevicePath \"\"" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.693226 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fd145d-c90c-48fc-b306-b89bb9a2edcf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.907329 4897 generic.go:334] "Generic (PLEG): container finished" podID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" 
containerID="0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0" exitCode=0 Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.907433 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85fqk" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.907430 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerDied","Data":"0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0"} Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.908346 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85fqk" event={"ID":"74fd145d-c90c-48fc-b306-b89bb9a2edcf","Type":"ContainerDied","Data":"b2ccca8de3b341e1397bdb9f4dc34aeffb28992eb1a68a0c87a640f1d5847638"} Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.908396 4897 scope.go:117] "RemoveContainer" containerID="0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.938240 4897 scope.go:117] "RemoveContainer" containerID="ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e" Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.952181 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85fqk"] Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.969732 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-85fqk"] Feb 14 19:49:31 crc kubenswrapper[4897]: I0214 19:49:31.978718 4897 scope.go:117] "RemoveContainer" containerID="e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9" Feb 14 19:49:32 crc kubenswrapper[4897]: I0214 19:49:32.022017 4897 scope.go:117] "RemoveContainer" containerID="0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0" Feb 14 
19:49:32 crc kubenswrapper[4897]: E0214 19:49:32.022708 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0\": container with ID starting with 0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0 not found: ID does not exist" containerID="0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0" Feb 14 19:49:32 crc kubenswrapper[4897]: I0214 19:49:32.022770 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0"} err="failed to get container status \"0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0\": rpc error: code = NotFound desc = could not find container \"0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0\": container with ID starting with 0d80f6a745666f0bfa09460925cc4f8ed5c5fed35e619a7bd5896cf6dc385ef0 not found: ID does not exist" Feb 14 19:49:32 crc kubenswrapper[4897]: I0214 19:49:32.022806 4897 scope.go:117] "RemoveContainer" containerID="ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e" Feb 14 19:49:32 crc kubenswrapper[4897]: E0214 19:49:32.023671 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e\": container with ID starting with ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e not found: ID does not exist" containerID="ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e" Feb 14 19:49:32 crc kubenswrapper[4897]: I0214 19:49:32.023724 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e"} err="failed to get container status 
\"ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e\": rpc error: code = NotFound desc = could not find container \"ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e\": container with ID starting with ccf9b5189b8e9a0edd5d84c8173baec4beb9c8588ac32652b9e3d731f7bde57e not found: ID does not exist" Feb 14 19:49:32 crc kubenswrapper[4897]: I0214 19:49:32.023770 4897 scope.go:117] "RemoveContainer" containerID="e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9" Feb 14 19:49:32 crc kubenswrapper[4897]: E0214 19:49:32.024313 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9\": container with ID starting with e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9 not found: ID does not exist" containerID="e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9" Feb 14 19:49:32 crc kubenswrapper[4897]: I0214 19:49:32.024386 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9"} err="failed to get container status \"e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9\": rpc error: code = NotFound desc = could not find container \"e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9\": container with ID starting with e63e56b6f6d570c090f19cf433d60b322eee00d5019b731ad60624bffe8d88a9 not found: ID does not exist" Feb 14 19:49:33 crc kubenswrapper[4897]: I0214 19:49:33.809662 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" path="/var/lib/kubelet/pods/74fd145d-c90c-48fc-b306-b89bb9a2edcf/volumes" Feb 14 19:49:39 crc kubenswrapper[4897]: I0214 19:49:39.795505 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 
19:49:39 crc kubenswrapper[4897]: E0214 19:49:39.796693 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:49:50 crc kubenswrapper[4897]: I0214 19:49:50.794452 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:49:50 crc kubenswrapper[4897]: E0214 19:49:50.795308 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:50:03 crc kubenswrapper[4897]: I0214 19:50:03.794515 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:50:03 crc kubenswrapper[4897]: E0214 19:50:03.795292 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:50:17 crc kubenswrapper[4897]: I0214 19:50:17.805514 4897 scope.go:117] "RemoveContainer" 
containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:50:17 crc kubenswrapper[4897]: E0214 19:50:17.806783 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:50:31 crc kubenswrapper[4897]: I0214 19:50:31.798712 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:50:31 crc kubenswrapper[4897]: E0214 19:50:31.799525 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:50:46 crc kubenswrapper[4897]: I0214 19:50:46.793737 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:50:46 crc kubenswrapper[4897]: E0214 19:50:46.794726 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:51:00 crc kubenswrapper[4897]: I0214 19:51:00.793809 4897 scope.go:117] 
"RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:51:00 crc kubenswrapper[4897]: E0214 19:51:00.794532 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:51:15 crc kubenswrapper[4897]: I0214 19:51:15.794562 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:51:16 crc kubenswrapper[4897]: I0214 19:51:16.194945 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065"} Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.040064 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gfdpb"] Feb 14 19:51:54 crc kubenswrapper[4897]: E0214 19:51:54.041270 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="extract-content" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.041289 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="extract-content" Feb 14 19:51:54 crc kubenswrapper[4897]: E0214 19:51:54.041332 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="registry-server" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.041359 4897 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="registry-server" Feb 14 19:51:54 crc kubenswrapper[4897]: E0214 19:51:54.041374 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="extract-utilities" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.041382 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="extract-utilities" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.041678 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fd145d-c90c-48fc-b306-b89bb9a2edcf" containerName="registry-server" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.043849 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.060898 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfdpb"] Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.131507 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jtb\" (UniqueName: \"kubernetes.io/projected/81c4703d-0bf8-4606-a295-c34619fd8155-kube-api-access-n9jtb\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.131673 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-catalog-content\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.131793 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-utilities\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.234613 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-catalog-content\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.234757 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-utilities\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.234808 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jtb\" (UniqueName: \"kubernetes.io/projected/81c4703d-0bf8-4606-a295-c34619fd8155-kube-api-access-n9jtb\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.235167 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-catalog-content\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.235282 4897 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-utilities\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.259829 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jtb\" (UniqueName: \"kubernetes.io/projected/81c4703d-0bf8-4606-a295-c34619fd8155-kube-api-access-n9jtb\") pod \"certified-operators-gfdpb\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.372980 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:51:54 crc kubenswrapper[4897]: I0214 19:51:54.913292 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfdpb"] Feb 14 19:51:55 crc kubenswrapper[4897]: I0214 19:51:55.691801 4897 generic.go:334] "Generic (PLEG): container finished" podID="81c4703d-0bf8-4606-a295-c34619fd8155" containerID="e61c44d8f34363cad5df3bf0a498faeaafecb9de467b8200c6c1e0f7124fa57e" exitCode=0 Feb 14 19:51:55 crc kubenswrapper[4897]: I0214 19:51:55.691907 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerDied","Data":"e61c44d8f34363cad5df3bf0a498faeaafecb9de467b8200c6c1e0f7124fa57e"} Feb 14 19:51:55 crc kubenswrapper[4897]: I0214 19:51:55.692240 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerStarted","Data":"c3afe8c6099c9e1098982509c4102cb1bd85b1887446a9377d491be1a2d5117b"} Feb 14 19:51:57 crc kubenswrapper[4897]: I0214 
19:51:57.718454 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerStarted","Data":"fa3a70bd09472fb1c04d2a53c4fa934ce795efa21c4f6d38657700896cf9da12"} Feb 14 19:51:58 crc kubenswrapper[4897]: I0214 19:51:58.734289 4897 generic.go:334] "Generic (PLEG): container finished" podID="81c4703d-0bf8-4606-a295-c34619fd8155" containerID="fa3a70bd09472fb1c04d2a53c4fa934ce795efa21c4f6d38657700896cf9da12" exitCode=0 Feb 14 19:51:58 crc kubenswrapper[4897]: I0214 19:51:58.734667 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerDied","Data":"fa3a70bd09472fb1c04d2a53c4fa934ce795efa21c4f6d38657700896cf9da12"} Feb 14 19:51:59 crc kubenswrapper[4897]: I0214 19:51:59.746567 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerStarted","Data":"d2e6ccec80e90d29c047560ccb0c9a62c88e0395ff65475d3fc2173c8cedfde9"} Feb 14 19:51:59 crc kubenswrapper[4897]: I0214 19:51:59.772741 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gfdpb" podStartSLOduration=2.112635495 podStartE2EDuration="5.772724085s" podCreationTimestamp="2026-02-14 19:51:54 +0000 UTC" firstStartedPulling="2026-02-14 19:51:55.693740284 +0000 UTC m=+4168.670148787" lastFinishedPulling="2026-02-14 19:51:59.353828894 +0000 UTC m=+4172.330237377" observedRunningTime="2026-02-14 19:51:59.766704678 +0000 UTC m=+4172.743113161" watchObservedRunningTime="2026-02-14 19:51:59.772724085 +0000 UTC m=+4172.749132568" Feb 14 19:52:04 crc kubenswrapper[4897]: I0214 19:52:04.373831 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:52:04 crc kubenswrapper[4897]: I0214 19:52:04.374384 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:52:05 crc kubenswrapper[4897]: I0214 19:52:05.436026 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gfdpb" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="registry-server" probeResult="failure" output=< Feb 14 19:52:05 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:52:05 crc kubenswrapper[4897]: > Feb 14 19:52:14 crc kubenswrapper[4897]: I0214 19:52:14.440750 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:52:14 crc kubenswrapper[4897]: I0214 19:52:14.508171 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:52:14 crc kubenswrapper[4897]: I0214 19:52:14.692828 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfdpb"] Feb 14 19:52:15 crc kubenswrapper[4897]: I0214 19:52:15.927411 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gfdpb" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="registry-server" containerID="cri-o://d2e6ccec80e90d29c047560ccb0c9a62c88e0395ff65475d3fc2173c8cedfde9" gracePeriod=2 Feb 14 19:52:16 crc kubenswrapper[4897]: I0214 19:52:16.943970 4897 generic.go:334] "Generic (PLEG): container finished" podID="81c4703d-0bf8-4606-a295-c34619fd8155" containerID="d2e6ccec80e90d29c047560ccb0c9a62c88e0395ff65475d3fc2173c8cedfde9" exitCode=0 Feb 14 19:52:16 crc kubenswrapper[4897]: I0214 19:52:16.944104 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerDied","Data":"d2e6ccec80e90d29c047560ccb0c9a62c88e0395ff65475d3fc2173c8cedfde9"} Feb 14 19:52:16 crc kubenswrapper[4897]: I0214 19:52:16.944420 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfdpb" event={"ID":"81c4703d-0bf8-4606-a295-c34619fd8155","Type":"ContainerDied","Data":"c3afe8c6099c9e1098982509c4102cb1bd85b1887446a9377d491be1a2d5117b"} Feb 14 19:52:16 crc kubenswrapper[4897]: I0214 19:52:16.944444 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3afe8c6099c9e1098982509c4102cb1bd85b1887446a9377d491be1a2d5117b" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.453667 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.632608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-utilities\") pod \"81c4703d-0bf8-4606-a295-c34619fd8155\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.632857 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-catalog-content\") pod \"81c4703d-0bf8-4606-a295-c34619fd8155\" (UID: \"81c4703d-0bf8-4606-a295-c34619fd8155\") " Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.632989 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9jtb\" (UniqueName: \"kubernetes.io/projected/81c4703d-0bf8-4606-a295-c34619fd8155-kube-api-access-n9jtb\") pod \"81c4703d-0bf8-4606-a295-c34619fd8155\" (UID: 
\"81c4703d-0bf8-4606-a295-c34619fd8155\") " Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.633368 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-utilities" (OuterVolumeSpecName: "utilities") pod "81c4703d-0bf8-4606-a295-c34619fd8155" (UID: "81c4703d-0bf8-4606-a295-c34619fd8155"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.634087 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.644254 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c4703d-0bf8-4606-a295-c34619fd8155-kube-api-access-n9jtb" (OuterVolumeSpecName: "kube-api-access-n9jtb") pod "81c4703d-0bf8-4606-a295-c34619fd8155" (UID: "81c4703d-0bf8-4606-a295-c34619fd8155"). InnerVolumeSpecName "kube-api-access-n9jtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.691418 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81c4703d-0bf8-4606-a295-c34619fd8155" (UID: "81c4703d-0bf8-4606-a295-c34619fd8155"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.736969 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c4703d-0bf8-4606-a295-c34619fd8155-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.737016 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9jtb\" (UniqueName: \"kubernetes.io/projected/81c4703d-0bf8-4606-a295-c34619fd8155-kube-api-access-n9jtb\") on node \"crc\" DevicePath \"\"" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.958486 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfdpb" Feb 14 19:52:17 crc kubenswrapper[4897]: I0214 19:52:17.996002 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfdpb"] Feb 14 19:52:18 crc kubenswrapper[4897]: I0214 19:52:18.012596 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gfdpb"] Feb 14 19:52:19 crc kubenswrapper[4897]: I0214 19:52:19.813949 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" path="/var/lib/kubelet/pods/81c4703d-0bf8-4606-a295-c34619fd8155/volumes" Feb 14 19:53:31 crc kubenswrapper[4897]: I0214 19:53:31.726176 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:53:31 crc kubenswrapper[4897]: I0214 19:53:31.726867 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:53:56 crc kubenswrapper[4897]: E0214 19:53:56.153276 4897 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.41:35756->38.102.83.41:34573: read tcp 38.102.83.41:35756->38.102.83.41:34573: read: connection reset by peer Feb 14 19:53:56 crc kubenswrapper[4897]: E0214 19:53:56.154149 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:35756->38.102.83.41:34573: write tcp 38.102.83.41:35756->38.102.83.41:34573: write: broken pipe Feb 14 19:54:01 crc kubenswrapper[4897]: I0214 19:54:01.725542 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:54:01 crc kubenswrapper[4897]: I0214 19:54:01.726300 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.375775 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-29tcf"] Feb 14 19:54:26 crc kubenswrapper[4897]: E0214 19:54:26.377275 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="extract-utilities" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.377300 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="extract-utilities" Feb 
14 19:54:26 crc kubenswrapper[4897]: E0214 19:54:26.377336 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="registry-server" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.377351 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="registry-server" Feb 14 19:54:26 crc kubenswrapper[4897]: E0214 19:54:26.377383 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="extract-content" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.377409 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="extract-content" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.377861 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c4703d-0bf8-4606-a295-c34619fd8155" containerName="registry-server" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.383773 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.393799 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29tcf"] Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.460275 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mr64\" (UniqueName: \"kubernetes.io/projected/02270ee3-594d-44a0-9ad0-2a9dbafa5717-kube-api-access-4mr64\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.460343 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-catalog-content\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.460468 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-utilities\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.563195 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-utilities\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.563615 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-utilities\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.563634 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mr64\" (UniqueName: \"kubernetes.io/projected/02270ee3-594d-44a0-9ad0-2a9dbafa5717-kube-api-access-4mr64\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.563701 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-catalog-content\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.564079 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-catalog-content\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.592292 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mr64\" (UniqueName: \"kubernetes.io/projected/02270ee3-594d-44a0-9ad0-2a9dbafa5717-kube-api-access-4mr64\") pod \"redhat-operators-29tcf\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:26 crc kubenswrapper[4897]: I0214 19:54:26.720958 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:27 crc kubenswrapper[4897]: W0214 19:54:27.151024 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02270ee3_594d_44a0_9ad0_2a9dbafa5717.slice/crio-4508c23eae99b6735f4b728fd1aab7a893fac97dbfe868c77b93d5c932cd3c37 WatchSource:0}: Error finding container 4508c23eae99b6735f4b728fd1aab7a893fac97dbfe868c77b93d5c932cd3c37: Status 404 returned error can't find the container with id 4508c23eae99b6735f4b728fd1aab7a893fac97dbfe868c77b93d5c932cd3c37 Feb 14 19:54:27 crc kubenswrapper[4897]: I0214 19:54:27.152146 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29tcf"] Feb 14 19:54:27 crc kubenswrapper[4897]: I0214 19:54:27.657244 4897 generic.go:334] "Generic (PLEG): container finished" podID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerID="d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6" exitCode=0 Feb 14 19:54:27 crc kubenswrapper[4897]: I0214 19:54:27.657350 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerDied","Data":"d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6"} Feb 14 19:54:27 crc kubenswrapper[4897]: I0214 19:54:27.657481 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerStarted","Data":"4508c23eae99b6735f4b728fd1aab7a893fac97dbfe868c77b93d5c932cd3c37"} Feb 14 19:54:27 crc kubenswrapper[4897]: I0214 19:54:27.659174 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:54:28 crc kubenswrapper[4897]: I0214 19:54:28.671287 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerStarted","Data":"21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2"} Feb 14 19:54:31 crc kubenswrapper[4897]: I0214 19:54:31.726073 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:54:31 crc kubenswrapper[4897]: I0214 19:54:31.726587 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:54:31 crc kubenswrapper[4897]: I0214 19:54:31.726656 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:54:31 crc kubenswrapper[4897]: I0214 19:54:31.727552 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:54:31 crc kubenswrapper[4897]: I0214 19:54:31.727597 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065" gracePeriod=600 Feb 14 19:54:31 
crc kubenswrapper[4897]: E0214 19:54:31.899543 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-conmon-7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f885c6c_b913_48e3_93fc_abf932515ea9.slice/crio-7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065.scope\": RecentStats: unable to find data in memory cache]" Feb 14 19:54:32 crc kubenswrapper[4897]: I0214 19:54:32.731159 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065" exitCode=0 Feb 14 19:54:32 crc kubenswrapper[4897]: I0214 19:54:32.731242 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065"} Feb 14 19:54:32 crc kubenswrapper[4897]: I0214 19:54:32.731472 4897 scope.go:117] "RemoveContainer" containerID="1a1d411f5ced1b49f694a67454c9a21b2fc2aff8df91ef95b71ba170beaf3c9c" Feb 14 19:54:33 crc kubenswrapper[4897]: I0214 19:54:33.746384 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d"} Feb 14 19:54:34 crc kubenswrapper[4897]: I0214 19:54:34.760585 4897 generic.go:334] "Generic (PLEG): container finished" podID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerID="21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2" 
exitCode=0 Feb 14 19:54:34 crc kubenswrapper[4897]: I0214 19:54:34.760630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerDied","Data":"21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2"} Feb 14 19:54:35 crc kubenswrapper[4897]: I0214 19:54:35.815663 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerStarted","Data":"23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c"} Feb 14 19:54:35 crc kubenswrapper[4897]: I0214 19:54:35.843781 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-29tcf" podStartSLOduration=2.34982258 podStartE2EDuration="9.843761814s" podCreationTimestamp="2026-02-14 19:54:26 +0000 UTC" firstStartedPulling="2026-02-14 19:54:27.658918157 +0000 UTC m=+4320.635326640" lastFinishedPulling="2026-02-14 19:54:35.152857381 +0000 UTC m=+4328.129265874" observedRunningTime="2026-02-14 19:54:35.836583491 +0000 UTC m=+4328.812991974" watchObservedRunningTime="2026-02-14 19:54:35.843761814 +0000 UTC m=+4328.820170287" Feb 14 19:54:36 crc kubenswrapper[4897]: I0214 19:54:36.722115 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:36 crc kubenswrapper[4897]: I0214 19:54:36.722423 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:37 crc kubenswrapper[4897]: I0214 19:54:37.766812 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-29tcf" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="registry-server" probeResult="failure" output=< Feb 14 19:54:37 crc kubenswrapper[4897]: timeout: failed to connect 
service ":50051" within 1s Feb 14 19:54:37 crc kubenswrapper[4897]: > Feb 14 19:54:47 crc kubenswrapper[4897]: I0214 19:54:47.258304 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:47 crc kubenswrapper[4897]: I0214 19:54:47.332962 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:47 crc kubenswrapper[4897]: I0214 19:54:47.502717 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29tcf"] Feb 14 19:54:48 crc kubenswrapper[4897]: I0214 19:54:48.933513 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-29tcf" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="registry-server" containerID="cri-o://23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c" gracePeriod=2 Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.532009 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.630664 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-utilities\") pod \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.630713 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mr64\" (UniqueName: \"kubernetes.io/projected/02270ee3-594d-44a0-9ad0-2a9dbafa5717-kube-api-access-4mr64\") pod \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.630743 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-catalog-content\") pod \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\" (UID: \"02270ee3-594d-44a0-9ad0-2a9dbafa5717\") " Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.631859 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-utilities" (OuterVolumeSpecName: "utilities") pod "02270ee3-594d-44a0-9ad0-2a9dbafa5717" (UID: "02270ee3-594d-44a0-9ad0-2a9dbafa5717"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.637793 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02270ee3-594d-44a0-9ad0-2a9dbafa5717-kube-api-access-4mr64" (OuterVolumeSpecName: "kube-api-access-4mr64") pod "02270ee3-594d-44a0-9ad0-2a9dbafa5717" (UID: "02270ee3-594d-44a0-9ad0-2a9dbafa5717"). InnerVolumeSpecName "kube-api-access-4mr64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.732847 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.732873 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mr64\" (UniqueName: \"kubernetes.io/projected/02270ee3-594d-44a0-9ad0-2a9dbafa5717-kube-api-access-4mr64\") on node \"crc\" DevicePath \"\"" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.761826 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02270ee3-594d-44a0-9ad0-2a9dbafa5717" (UID: "02270ee3-594d-44a0-9ad0-2a9dbafa5717"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.834937 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02270ee3-594d-44a0-9ad0-2a9dbafa5717-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.944082 4897 generic.go:334] "Generic (PLEG): container finished" podID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerID="23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c" exitCode=0 Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.944119 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerDied","Data":"23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c"} Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.944164 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-29tcf" event={"ID":"02270ee3-594d-44a0-9ad0-2a9dbafa5717","Type":"ContainerDied","Data":"4508c23eae99b6735f4b728fd1aab7a893fac97dbfe868c77b93d5c932cd3c37"} Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.944180 4897 scope.go:117] "RemoveContainer" containerID="23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.944131 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29tcf" Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.968408 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29tcf"] Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.988703 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-29tcf"] Feb 14 19:54:49 crc kubenswrapper[4897]: I0214 19:54:49.996986 4897 scope.go:117] "RemoveContainer" containerID="21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.028214 4897 scope.go:117] "RemoveContainer" containerID="d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.067161 4897 scope.go:117] "RemoveContainer" containerID="23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c" Feb 14 19:54:50 crc kubenswrapper[4897]: E0214 19:54:50.067622 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c\": container with ID starting with 23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c not found: ID does not exist" containerID="23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.067662 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c"} err="failed to get container status \"23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c\": rpc error: code = NotFound desc = could not find container \"23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c\": container with ID starting with 23ef426ad8e9e8e64748c24dc396cb707789ec7c7bda1920284ccdc39de0944c not found: ID does not exist" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.067688 4897 scope.go:117] "RemoveContainer" containerID="21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2" Feb 14 19:54:50 crc kubenswrapper[4897]: E0214 19:54:50.068101 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2\": container with ID starting with 21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2 not found: ID does not exist" containerID="21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.068156 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2"} err="failed to get container status \"21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2\": rpc error: code = NotFound desc = could not find container \"21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2\": container with ID starting with 21b4629f8aeb67421ef5772b47f2688301c3b9605b51cb476bc277834919eca2 not found: ID does not exist" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.068180 4897 scope.go:117] "RemoveContainer" containerID="d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6" Feb 14 19:54:50 crc kubenswrapper[4897]: E0214 
19:54:50.068557 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6\": container with ID starting with d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6 not found: ID does not exist" containerID="d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6" Feb 14 19:54:50 crc kubenswrapper[4897]: I0214 19:54:50.068586 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6"} err="failed to get container status \"d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6\": rpc error: code = NotFound desc = could not find container \"d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6\": container with ID starting with d963878dbc2a569baff5874d4219b0183e8a958c2326c56391b98309475edfb6 not found: ID does not exist" Feb 14 19:54:51 crc kubenswrapper[4897]: I0214 19:54:51.814621 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" path="/var/lib/kubelet/pods/02270ee3-594d-44a0-9ad0-2a9dbafa5717/volumes" Feb 14 19:57:01 crc kubenswrapper[4897]: I0214 19:57:01.726398 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:57:01 crc kubenswrapper[4897]: I0214 19:57:01.726928 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.659096 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brfbr"] Feb 14 19:57:28 crc kubenswrapper[4897]: E0214 19:57:28.660744 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="extract-utilities" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.660768 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="extract-utilities" Feb 14 19:57:28 crc kubenswrapper[4897]: E0214 19:57:28.660834 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="registry-server" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.660845 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="registry-server" Feb 14 19:57:28 crc kubenswrapper[4897]: E0214 19:57:28.660875 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="extract-content" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.660888 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="extract-content" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.661290 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="02270ee3-594d-44a0-9ad0-2a9dbafa5717" containerName="registry-server" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.665688 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.676486 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfbr"] Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.721417 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-utilities\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.721465 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-catalog-content\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.721670 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkkf\" (UniqueName: \"kubernetes.io/projected/da8d334a-3632-4974-ac3c-cfeb1864b1be-kube-api-access-xdkkf\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.824587 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-utilities\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.824645 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-catalog-content\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.824716 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkkf\" (UniqueName: \"kubernetes.io/projected/da8d334a-3632-4974-ac3c-cfeb1864b1be-kube-api-access-xdkkf\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.825260 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-utilities\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.825582 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-catalog-content\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.842918 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkkf\" (UniqueName: \"kubernetes.io/projected/da8d334a-3632-4974-ac3c-cfeb1864b1be-kube-api-access-xdkkf\") pod \"redhat-marketplace-brfbr\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:28 crc kubenswrapper[4897]: I0214 19:57:28.998442 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:29 crc kubenswrapper[4897]: I0214 19:57:29.559020 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfbr"] Feb 14 19:57:29 crc kubenswrapper[4897]: I0214 19:57:29.957801 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerStarted","Data":"8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c"} Feb 14 19:57:29 crc kubenswrapper[4897]: I0214 19:57:29.958167 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerStarted","Data":"eeb81fad5247d8cb228b8258e496148b5e597bc4da672c67682b509279655c27"} Feb 14 19:57:30 crc kubenswrapper[4897]: I0214 19:57:30.972544 4897 generic.go:334] "Generic (PLEG): container finished" podID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerID="8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c" exitCode=0 Feb 14 19:57:30 crc kubenswrapper[4897]: I0214 19:57:30.972612 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerDied","Data":"8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c"} Feb 14 19:57:31 crc kubenswrapper[4897]: I0214 19:57:31.726534 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:57:31 crc kubenswrapper[4897]: I0214 19:57:31.726881 4897 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:57:31 crc kubenswrapper[4897]: I0214 19:57:31.989093 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerStarted","Data":"98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82"} Feb 14 19:57:33 crc kubenswrapper[4897]: I0214 19:57:33.004437 4897 generic.go:334] "Generic (PLEG): container finished" podID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerID="98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82" exitCode=0 Feb 14 19:57:33 crc kubenswrapper[4897]: I0214 19:57:33.004509 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerDied","Data":"98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82"} Feb 14 19:57:35 crc kubenswrapper[4897]: I0214 19:57:35.026700 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerStarted","Data":"5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b"} Feb 14 19:57:35 crc kubenswrapper[4897]: I0214 19:57:35.056166 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brfbr" podStartSLOduration=4.577451374 podStartE2EDuration="7.056144861s" podCreationTimestamp="2026-02-14 19:57:28 +0000 UTC" firstStartedPulling="2026-02-14 19:57:30.974654985 +0000 UTC m=+4503.951063508" lastFinishedPulling="2026-02-14 19:57:33.453348482 +0000 UTC m=+4506.429756995" observedRunningTime="2026-02-14 
19:57:35.048276187 +0000 UTC m=+4508.024684680" watchObservedRunningTime="2026-02-14 19:57:35.056144861 +0000 UTC m=+4508.032553334" Feb 14 19:57:38 crc kubenswrapper[4897]: I0214 19:57:38.998874 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:38 crc kubenswrapper[4897]: I0214 19:57:38.999382 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:39 crc kubenswrapper[4897]: I0214 19:57:39.148390 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:39 crc kubenswrapper[4897]: I0214 19:57:39.214804 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:39 crc kubenswrapper[4897]: I0214 19:57:39.401814 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfbr"] Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.156017 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-brfbr" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="registry-server" containerID="cri-o://5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b" gracePeriod=2 Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.742935 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.794140 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdkkf\" (UniqueName: \"kubernetes.io/projected/da8d334a-3632-4974-ac3c-cfeb1864b1be-kube-api-access-xdkkf\") pod \"da8d334a-3632-4974-ac3c-cfeb1864b1be\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.794880 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-utilities\") pod \"da8d334a-3632-4974-ac3c-cfeb1864b1be\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.795055 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-catalog-content\") pod \"da8d334a-3632-4974-ac3c-cfeb1864b1be\" (UID: \"da8d334a-3632-4974-ac3c-cfeb1864b1be\") " Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.796128 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-utilities" (OuterVolumeSpecName: "utilities") pod "da8d334a-3632-4974-ac3c-cfeb1864b1be" (UID: "da8d334a-3632-4974-ac3c-cfeb1864b1be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.801070 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8d334a-3632-4974-ac3c-cfeb1864b1be-kube-api-access-xdkkf" (OuterVolumeSpecName: "kube-api-access-xdkkf") pod "da8d334a-3632-4974-ac3c-cfeb1864b1be" (UID: "da8d334a-3632-4974-ac3c-cfeb1864b1be"). InnerVolumeSpecName "kube-api-access-xdkkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.822271 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da8d334a-3632-4974-ac3c-cfeb1864b1be" (UID: "da8d334a-3632-4974-ac3c-cfeb1864b1be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.897286 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.897313 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8d334a-3632-4974-ac3c-cfeb1864b1be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:57:41 crc kubenswrapper[4897]: I0214 19:57:41.897325 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdkkf\" (UniqueName: \"kubernetes.io/projected/da8d334a-3632-4974-ac3c-cfeb1864b1be-kube-api-access-xdkkf\") on node \"crc\" DevicePath \"\"" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.170975 4897 generic.go:334] "Generic (PLEG): container finished" podID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerID="5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b" exitCode=0 Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.171061 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerDied","Data":"5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b"} Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.171417 4897 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-brfbr" event={"ID":"da8d334a-3632-4974-ac3c-cfeb1864b1be","Type":"ContainerDied","Data":"eeb81fad5247d8cb228b8258e496148b5e597bc4da672c67682b509279655c27"} Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.171450 4897 scope.go:117] "RemoveContainer" containerID="5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.171108 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfbr" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.197878 4897 scope.go:117] "RemoveContainer" containerID="98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.245690 4897 scope.go:117] "RemoveContainer" containerID="8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.252492 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfbr"] Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.269072 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfbr"] Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.298837 4897 scope.go:117] "RemoveContainer" containerID="5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b" Feb 14 19:57:42 crc kubenswrapper[4897]: E0214 19:57:42.299244 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b\": container with ID starting with 5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b not found: ID does not exist" containerID="5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.299281 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b"} err="failed to get container status \"5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b\": rpc error: code = NotFound desc = could not find container \"5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b\": container with ID starting with 5f5ec5e6684f9b3b75f24e58f8d675a0dcd903faf80f3adaa332a9b69049c88b not found: ID does not exist" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.299331 4897 scope.go:117] "RemoveContainer" containerID="98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82" Feb 14 19:57:42 crc kubenswrapper[4897]: E0214 19:57:42.299619 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82\": container with ID starting with 98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82 not found: ID does not exist" containerID="98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.299663 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82"} err="failed to get container status \"98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82\": rpc error: code = NotFound desc = could not find container \"98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82\": container with ID starting with 98c45da15d2b4d6e1efafb04f315dfba161f8804f135b8aafe0ddd1ba5765d82 not found: ID does not exist" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.299693 4897 scope.go:117] "RemoveContainer" containerID="8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c" Feb 14 19:57:42 crc kubenswrapper[4897]: E0214 
19:57:42.299956 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c\": container with ID starting with 8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c not found: ID does not exist" containerID="8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c" Feb 14 19:57:42 crc kubenswrapper[4897]: I0214 19:57:42.299981 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c"} err="failed to get container status \"8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c\": rpc error: code = NotFound desc = could not find container \"8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c\": container with ID starting with 8c68ecc633022f0f3589ad492becf4b4f25fdd702607fb5fbbc2439718ae5a2c not found: ID does not exist" Feb 14 19:57:43 crc kubenswrapper[4897]: I0214 19:57:43.810768 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" path="/var/lib/kubelet/pods/da8d334a-3632-4974-ac3c-cfeb1864b1be/volumes" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.424842 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 19:57:53 crc kubenswrapper[4897]: E0214 19:57:53.426331 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="extract-content" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.426359 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="extract-content" Feb 14 19:57:53 crc kubenswrapper[4897]: E0214 19:57:53.426413 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" 
containerName="registry-server" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.426429 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="registry-server" Feb 14 19:57:53 crc kubenswrapper[4897]: E0214 19:57:53.426468 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="extract-utilities" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.426483 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="extract-utilities" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.426962 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8d334a-3632-4974-ac3c-cfeb1864b1be" containerName="registry-server" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.428341 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.430853 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.431928 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mgspz" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.432132 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.432203 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.446613 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.592873 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593228 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4j7r\" (UniqueName: \"kubernetes.io/projected/1ccac56d-8e29-4241-99ef-bb65d3ff373f-kube-api-access-n4j7r\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593279 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593303 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593328 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593354 4897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593373 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593538 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.593559 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.695893 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: 
I0214 19:57:53.695953 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.695985 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.696044 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4j7r\" (UniqueName: \"kubernetes.io/projected/1ccac56d-8e29-4241-99ef-bb65d3ff373f-kube-api-access-n4j7r\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.696105 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.696134 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.696164 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.696193 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.696222 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.697965 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.698499 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.698893 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.701263 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.701921 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.705097 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.706477 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.709591 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.716452 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4j7r\" (UniqueName: \"kubernetes.io/projected/1ccac56d-8e29-4241-99ef-bb65d3ff373f-kube-api-access-n4j7r\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.736163 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") " pod="openstack/tempest-tests-tempest" Feb 14 19:57:53 crc kubenswrapper[4897]: I0214 19:57:53.759179 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 19:57:54 crc kubenswrapper[4897]: I0214 19:57:54.302579 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 19:57:54 crc kubenswrapper[4897]: I0214 19:57:54.328405 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccac56d-8e29-4241-99ef-bb65d3ff373f","Type":"ContainerStarted","Data":"b3cf627af92b6e17d2bb6f392528c27fca45bdb935b0a4b28404284d663dac28"} Feb 14 19:58:01 crc kubenswrapper[4897]: I0214 19:58:01.725570 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 19:58:01 crc kubenswrapper[4897]: I0214 19:58:01.726179 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" 
podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 19:58:01 crc kubenswrapper[4897]: I0214 19:58:01.726219 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 19:58:01 crc kubenswrapper[4897]: I0214 19:58:01.727021 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 19:58:01 crc kubenswrapper[4897]: I0214 19:58:01.727083 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" gracePeriod=600 Feb 14 19:58:01 crc kubenswrapper[4897]: E0214 19:58:01.870265 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:58:02 crc kubenswrapper[4897]: I0214 19:58:02.453750 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" exitCode=0 Feb 14 
19:58:02 crc kubenswrapper[4897]: I0214 19:58:02.453799 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d"} Feb 14 19:58:02 crc kubenswrapper[4897]: I0214 19:58:02.453838 4897 scope.go:117] "RemoveContainer" containerID="7d0e93917dd8a36f9df22083fb12bdf30d6b7b30575e0be385fe1a6647406065" Feb 14 19:58:02 crc kubenswrapper[4897]: I0214 19:58:02.454623 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:58:02 crc kubenswrapper[4897]: E0214 19:58:02.454943 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:58:15 crc kubenswrapper[4897]: I0214 19:58:15.794131 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:58:15 crc kubenswrapper[4897]: E0214 19:58:15.795411 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:58:25 crc kubenswrapper[4897]: E0214 19:58:25.637745 4897 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 14 19:58:25 crc kubenswrapper[4897]: E0214 19:58:25.643231 4897 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:n
il,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4j7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(1ccac56d-8e29-4241-99ef-bb65d3ff373f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 19:58:25 crc kubenswrapper[4897]: E0214 19:58:25.644407 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="1ccac56d-8e29-4241-99ef-bb65d3ff373f" Feb 14 19:58:25 crc kubenswrapper[4897]: E0214 19:58:25.779716 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="1ccac56d-8e29-4241-99ef-bb65d3ff373f" Feb 14 19:58:28 crc kubenswrapper[4897]: I0214 19:58:28.794696 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:58:28 crc kubenswrapper[4897]: E0214 19:58:28.795870 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:58:37 crc kubenswrapper[4897]: I0214 19:58:37.289857 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 19:58:38 crc kubenswrapper[4897]: I0214 19:58:38.954824 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccac56d-8e29-4241-99ef-bb65d3ff373f","Type":"ContainerStarted","Data":"8a1b545ff788f34630c4b7e32a6ca1975abf41bf1e0380280f254144e849184b"} Feb 14 19:58:38 crc kubenswrapper[4897]: I0214 19:58:38.978913 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.99879807 podStartE2EDuration="46.978897813s" podCreationTimestamp="2026-02-14 19:57:52 +0000 UTC" firstStartedPulling="2026-02-14 19:57:54.306723291 +0000 UTC m=+4527.283131814" lastFinishedPulling="2026-02-14 19:58:37.286823074 +0000 UTC m=+4570.263231557" observedRunningTime="2026-02-14 19:58:38.975830758 +0000 UTC m=+4571.952239241" watchObservedRunningTime="2026-02-14 
19:58:38.978897813 +0000 UTC m=+4571.955306286" Feb 14 19:58:41 crc kubenswrapper[4897]: I0214 19:58:41.795246 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:58:41 crc kubenswrapper[4897]: E0214 19:58:41.796306 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:58:43 crc kubenswrapper[4897]: I0214 19:58:43.709574 4897 scope.go:117] "RemoveContainer" containerID="d2e6ccec80e90d29c047560ccb0c9a62c88e0395ff65475d3fc2173c8cedfde9" Feb 14 19:58:43 crc kubenswrapper[4897]: I0214 19:58:43.756472 4897 scope.go:117] "RemoveContainer" containerID="fa3a70bd09472fb1c04d2a53c4fa934ce795efa21c4f6d38657700896cf9da12" Feb 14 19:58:43 crc kubenswrapper[4897]: I0214 19:58:43.803425 4897 scope.go:117] "RemoveContainer" containerID="e61c44d8f34363cad5df3bf0a498faeaafecb9de467b8200c6c1e0f7124fa57e" Feb 14 19:58:54 crc kubenswrapper[4897]: I0214 19:58:54.794534 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:58:54 crc kubenswrapper[4897]: E0214 19:58:54.795581 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:59:06 crc kubenswrapper[4897]: I0214 19:59:06.794751 4897 
scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:59:06 crc kubenswrapper[4897]: E0214 19:59:06.796323 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:59:19 crc kubenswrapper[4897]: I0214 19:59:19.794787 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:59:19 crc kubenswrapper[4897]: E0214 19:59:19.796290 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.593272 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nb67n"] Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.624648 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.738625 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-utilities\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.739844 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544cv\" (UniqueName: \"kubernetes.io/projected/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-kube-api-access-544cv\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.740153 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-catalog-content\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.746585 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb67n"] Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.844407 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-utilities\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.844475 4897 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-544cv\" (UniqueName: \"kubernetes.io/projected/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-kube-api-access-544cv\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.844708 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-catalog-content\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.854782 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-utilities\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.854826 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-catalog-content\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.884616 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544cv\" (UniqueName: \"kubernetes.io/projected/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-kube-api-access-544cv\") pod \"community-operators-nb67n\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:29 crc kubenswrapper[4897]: I0214 19:59:29.957239 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:31 crc kubenswrapper[4897]: I0214 19:59:31.235257 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb67n"] Feb 14 19:59:31 crc kubenswrapper[4897]: W0214 19:59:31.253640 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4165041_91e4_45f5_8af3_3ac1b9e73ea3.slice/crio-da12cf640a0181b53c18045e0350c9d0513dee1389cbe54986863ef46b80d9df WatchSource:0}: Error finding container da12cf640a0181b53c18045e0350c9d0513dee1389cbe54986863ef46b80d9df: Status 404 returned error can't find the container with id da12cf640a0181b53c18045e0350c9d0513dee1389cbe54986863ef46b80d9df Feb 14 19:59:31 crc kubenswrapper[4897]: I0214 19:59:31.635839 4897 generic.go:334] "Generic (PLEG): container finished" podID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerID="91ba7817f6d7c50104b6d284536ebb12a7ac1e04c1783c83c838705fe066c41e" exitCode=0 Feb 14 19:59:31 crc kubenswrapper[4897]: I0214 19:59:31.636616 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerDied","Data":"91ba7817f6d7c50104b6d284536ebb12a7ac1e04c1783c83c838705fe066c41e"} Feb 14 19:59:31 crc kubenswrapper[4897]: I0214 19:59:31.636932 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerStarted","Data":"da12cf640a0181b53c18045e0350c9d0513dee1389cbe54986863ef46b80d9df"} Feb 14 19:59:31 crc kubenswrapper[4897]: I0214 19:59:31.642237 4897 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 19:59:33 crc kubenswrapper[4897]: I0214 19:59:33.660057 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerStarted","Data":"22295872c2844194e7c9f74cd5c94276bd12352c93eb4b4fb2eca17c6a9c4d2f"} Feb 14 19:59:33 crc kubenswrapper[4897]: I0214 19:59:33.795299 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:59:33 crc kubenswrapper[4897]: E0214 19:59:33.795626 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:59:35 crc kubenswrapper[4897]: I0214 19:59:35.685813 4897 generic.go:334] "Generic (PLEG): container finished" podID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerID="22295872c2844194e7c9f74cd5c94276bd12352c93eb4b4fb2eca17c6a9c4d2f" exitCode=0 Feb 14 19:59:35 crc kubenswrapper[4897]: I0214 19:59:35.685889 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerDied","Data":"22295872c2844194e7c9f74cd5c94276bd12352c93eb4b4fb2eca17c6a9c4d2f"} Feb 14 19:59:36 crc kubenswrapper[4897]: I0214 19:59:36.698247 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerStarted","Data":"eb5311705a278eb72a2964896c3be48a51f990ec2fc2fd23445a55cd4c11f616"} Feb 14 19:59:36 crc kubenswrapper[4897]: I0214 19:59:36.730415 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nb67n" 
podStartSLOduration=3.2312404040000002 podStartE2EDuration="7.724197095s" podCreationTimestamp="2026-02-14 19:59:29 +0000 UTC" firstStartedPulling="2026-02-14 19:59:31.639654921 +0000 UTC m=+4624.616063404" lastFinishedPulling="2026-02-14 19:59:36.132611622 +0000 UTC m=+4629.109020095" observedRunningTime="2026-02-14 19:59:36.714156923 +0000 UTC m=+4629.690565426" watchObservedRunningTime="2026-02-14 19:59:36.724197095 +0000 UTC m=+4629.700605588" Feb 14 19:59:39 crc kubenswrapper[4897]: I0214 19:59:39.961472 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:39 crc kubenswrapper[4897]: I0214 19:59:39.962132 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:41 crc kubenswrapper[4897]: I0214 19:59:41.017510 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nb67n" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="registry-server" probeResult="failure" output=< Feb 14 19:59:41 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 19:59:41 crc kubenswrapper[4897]: > Feb 14 19:59:46 crc kubenswrapper[4897]: I0214 19:59:46.794907 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 19:59:46 crc kubenswrapper[4897]: E0214 19:59:46.795964 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 19:59:50 crc kubenswrapper[4897]: I0214 19:59:50.012751 4897 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:50 crc kubenswrapper[4897]: I0214 19:59:50.088428 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:50 crc kubenswrapper[4897]: I0214 19:59:50.350368 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb67n"] Feb 14 19:59:51 crc kubenswrapper[4897]: I0214 19:59:51.924969 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nb67n" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="registry-server" containerID="cri-o://eb5311705a278eb72a2964896c3be48a51f990ec2fc2fd23445a55cd4c11f616" gracePeriod=2 Feb 14 19:59:52 crc kubenswrapper[4897]: I0214 19:59:52.952884 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerDied","Data":"eb5311705a278eb72a2964896c3be48a51f990ec2fc2fd23445a55cd4c11f616"} Feb 14 19:59:52 crc kubenswrapper[4897]: I0214 19:59:52.952483 4897 generic.go:334] "Generic (PLEG): container finished" podID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerID="eb5311705a278eb72a2964896c3be48a51f990ec2fc2fd23445a55cd4c11f616" exitCode=0 Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.307846 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.345997 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-utilities\") pod \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.346088 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-544cv\" (UniqueName: \"kubernetes.io/projected/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-kube-api-access-544cv\") pod \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.346277 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-catalog-content\") pod \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\" (UID: \"b4165041-91e4-45f5-8af3-3ac1b9e73ea3\") " Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.352564 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-utilities" (OuterVolumeSpecName: "utilities") pod "b4165041-91e4-45f5-8af3-3ac1b9e73ea3" (UID: "b4165041-91e4-45f5-8af3-3ac1b9e73ea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.385162 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-kube-api-access-544cv" (OuterVolumeSpecName: "kube-api-access-544cv") pod "b4165041-91e4-45f5-8af3-3ac1b9e73ea3" (UID: "b4165041-91e4-45f5-8af3-3ac1b9e73ea3"). InnerVolumeSpecName "kube-api-access-544cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.452559 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.452600 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-544cv\" (UniqueName: \"kubernetes.io/projected/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-kube-api-access-544cv\") on node \"crc\" DevicePath \"\"" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.471541 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4165041-91e4-45f5-8af3-3ac1b9e73ea3" (UID: "b4165041-91e4-45f5-8af3-3ac1b9e73ea3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.554940 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4165041-91e4-45f5-8af3-3ac1b9e73ea3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.964518 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb67n" event={"ID":"b4165041-91e4-45f5-8af3-3ac1b9e73ea3","Type":"ContainerDied","Data":"da12cf640a0181b53c18045e0350c9d0513dee1389cbe54986863ef46b80d9df"} Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.964552 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb67n" Feb 14 19:59:53 crc kubenswrapper[4897]: I0214 19:59:53.968357 4897 scope.go:117] "RemoveContainer" containerID="eb5311705a278eb72a2964896c3be48a51f990ec2fc2fd23445a55cd4c11f616" Feb 14 19:59:54 crc kubenswrapper[4897]: I0214 19:59:54.017572 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb67n"] Feb 14 19:59:54 crc kubenswrapper[4897]: I0214 19:59:54.030690 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nb67n"] Feb 14 19:59:54 crc kubenswrapper[4897]: I0214 19:59:54.038709 4897 scope.go:117] "RemoveContainer" containerID="22295872c2844194e7c9f74cd5c94276bd12352c93eb4b4fb2eca17c6a9c4d2f" Feb 14 19:59:54 crc kubenswrapper[4897]: I0214 19:59:54.061443 4897 scope.go:117] "RemoveContainer" containerID="91ba7817f6d7c50104b6d284536ebb12a7ac1e04c1783c83c838705fe066c41e" Feb 14 19:59:55 crc kubenswrapper[4897]: I0214 19:59:55.825684 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" path="/var/lib/kubelet/pods/b4165041-91e4-45f5-8af3-3ac1b9e73ea3/volumes" Feb 14 20:00:00 crc kubenswrapper[4897]: I0214 20:00:00.798585 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:00:00 crc kubenswrapper[4897]: E0214 20:00:00.808674 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.076720 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr"] Feb 14 20:00:01 crc kubenswrapper[4897]: E0214 20:00:01.091794 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="extract-content" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.091837 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="extract-content" Feb 14 20:00:01 crc kubenswrapper[4897]: E0214 20:00:01.091874 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="extract-utilities" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.091882 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="extract-utilities" Feb 14 20:00:01 crc kubenswrapper[4897]: E0214 20:00:01.091913 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="registry-server" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.091931 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="registry-server" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.097634 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4165041-91e4-45f5-8af3-3ac1b9e73ea3" containerName="registry-server" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.111995 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.143606 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d570fe1-d9f5-4d80-baf9-17877fd99929-secret-volume\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.143699 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdltb\" (UniqueName: \"kubernetes.io/projected/0d570fe1-d9f5-4d80-baf9-17877fd99929-kube-api-access-fdltb\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.144177 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d570fe1-d9f5-4d80-baf9-17877fd99929-config-volume\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.146189 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.146195 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.246967 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/0d570fe1-d9f5-4d80-baf9-17877fd99929-config-volume\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.247214 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d570fe1-d9f5-4d80-baf9-17877fd99929-secret-volume\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.247267 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdltb\" (UniqueName: \"kubernetes.io/projected/0d570fe1-d9f5-4d80-baf9-17877fd99929-kube-api-access-fdltb\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.303354 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d570fe1-d9f5-4d80-baf9-17877fd99929-config-volume\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.427371 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr"] Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.582144 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d570fe1-d9f5-4d80-baf9-17877fd99929-secret-volume\") pod 
\"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.585215 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdltb\" (UniqueName: \"kubernetes.io/projected/0d570fe1-d9f5-4d80-baf9-17877fd99929-kube-api-access-fdltb\") pod \"collect-profiles-29518320-2tbwr\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:01 crc kubenswrapper[4897]: I0214 20:00:01.832735 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" Feb 14 20:00:03 crc kubenswrapper[4897]: I0214 20:00:03.724285 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr"] Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.121515 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" event={"ID":"0d570fe1-d9f5-4d80-baf9-17877fd99929","Type":"ContainerStarted","Data":"056ce3e4fa6621e1adf777f475e35f601c9bf56e2e3c06a4812ca4a87b199ab1"} Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.122058 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" event={"ID":"0d570fe1-d9f5-4d80-baf9-17877fd99929","Type":"ContainerStarted","Data":"799a2cf02c5d80661b3e36a24b3a6ad2011c0e3eda62510ce76fe43281cedfd3"} Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.301331 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" podStartSLOduration=5.299201888 podStartE2EDuration="5.299201888s" podCreationTimestamp="2026-02-14 
20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 20:00:05.288838297 +0000 UTC m=+4658.265246790" watchObservedRunningTime="2026-02-14 20:00:05.299201888 +0000 UTC m=+4658.275610371" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.524244 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.784306 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndtpt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.784381 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" podUID="c87321f8-a781-4a08-93e8-2280f2ee57b8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.784318 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndtpt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.784618 4897 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" podUID="c87321f8-a781-4a08-93e8-2280f2ee57b8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.810077 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.810136 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.810181 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.810243 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.826376 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn 
container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.826414 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.826465 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.826476 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.962291 4897 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-q66h9 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.5:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.962299 4897 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-q66h9 container/perses-operator namespace/openshift-operators: Readiness probe 
status=failure output="Get \"http://10.217.0.5:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.962361 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.5:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:05 crc kubenswrapper[4897]: I0214 20:00:05.962393 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.5:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.015754 4897 patch_prober.go:28] interesting pod/metrics-server-7cfcf6657f-wsnmf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.015767 4897 patch_prober.go:28] interesting pod/metrics-server-7cfcf6657f-wsnmf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.015835 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podUID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerName="metrics-server" 
probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.015888 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podUID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.430586 4897 patch_prober.go:28] interesting pod/monitoring-plugin-79d749bcb5-rfm5g container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.430995 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" podUID="959d187e-bbbf-4e61-b0d7-67a6b30529a4" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.784922 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:06 crc kubenswrapper[4897]: I0214 20:00:06.785063 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 
14 20:00:07 crc kubenswrapper[4897]: I0214 20:00:07.779424 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:07 crc kubenswrapper[4897]: I0214 20:00:07.779830 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:08 crc kubenswrapper[4897]: I0214 20:00:08.450258 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" podUID="afb3d9d3-a3e1-4aac-89ef-a7128579e6e9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:08 crc kubenswrapper[4897]: I0214 20:00:08.894324 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:08 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:08 crc kubenswrapper[4897]: > Feb 14 20:00:08 crc kubenswrapper[4897]: I0214 20:00:08.894381 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:08 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:08 crc kubenswrapper[4897]: > Feb 14 20:00:08 crc kubenswrapper[4897]: I0214 20:00:08.935834 4897 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler 
namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:08 crc kubenswrapper[4897]: I0214 20:00:08.935907 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.168624 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.168715 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.168920 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 
20:00:09.169045 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.265216 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.265523 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.265414 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:09 crc kubenswrapper[4897]: I0214 20:00:09.266019 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.270864 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.271175 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.270907 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.271254 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.462244 4897 patch_prober.go:28] interesting pod/loki-operator-controller-manager-78d86b9dcc-fgbpn container/manager 
namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.49:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.462316 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" podUID="ab082f7b-c89d-4db4-a04f-e2db844fa022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.49:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.520109 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.520191 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.520307 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.520259 4897 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.771208 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.771636 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.813329 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.813385 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.854250 4897 patch_prober.go:28] interesting 
pod/authentication-operator-69f744f599-rx2r9 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.854324 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" podUID="2fd14f21-0836-40b2-b509-ec296556f45c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.899675 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.899727 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.899739 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.899799 4897 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.919453 4897 patch_prober.go:28] interesting pod/console-7f7fb6d64c-hkskf container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:10 crc kubenswrapper[4897]: I0214 20:00:10.919508 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7f7fb6d64c-hkskf" podUID="e77572d7-6aef-4c6c-bb23-bdb47d9d28ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.259217 4897 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.259264 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 
20:00:11.301227 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.301287 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.301366 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.301418 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.301454 4897 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.301496 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:11 crc kubenswrapper[4897]: I0214 20:00:11.301972 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" podUID="55ee13ff-72a6-4bdb-8461-fb545f66b881" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:12 crc kubenswrapper[4897]: I0214 20:00:12.393321 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:12 crc kubenswrapper[4897]: I0214 20:00:12.393610 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:13 crc kubenswrapper[4897]: I0214 20:00:13.803304 4897 scope.go:117] "RemoveContainer" 
containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:00:13 crc kubenswrapper[4897]: E0214 20:00:13.809185 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:00:15 crc kubenswrapper[4897]: I0214 20:00:15.531889 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:15 crc kubenswrapper[4897]: I0214 20:00:15.594377 4897 trace.go:236] Trace[587663384]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (14-Feb-2026 20:00:14.125) (total time: 1458ms): Feb 14 20:00:15 crc kubenswrapper[4897]: Trace[587663384]: [1.458600313s] [1.458600313s] END Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.017005 4897 patch_prober.go:28] interesting pod/metrics-server-7cfcf6657f-wsnmf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.017203 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podUID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.433127 4897 patch_prober.go:28] interesting pod/monitoring-plugin-79d749bcb5-rfm5g container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.433201 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" podUID="959d187e-bbbf-4e61-b0d7-67a6b30529a4" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.566215 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4r6x6" podUID="ae82eac1-c909-47f2-b4b5-2f3f1267345e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.566255 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4r6x6" podUID="ae82eac1-c909-47f2-b4b5-2f3f1267345e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.795613 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:16 crc kubenswrapper[4897]: I0214 20:00:16.795617 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:17 crc kubenswrapper[4897]: I0214 20:00:17.566582 4897 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-tf6nv container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:17 crc kubenswrapper[4897]: I0214 20:00:17.566953 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" podUID="c70ba798-8c12-43e8-a0e2-d54617b6bb84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:17 crc kubenswrapper[4897]: I0214 20:00:17.642194 4897 patch_prober.go:28] interesting pod/thanos-querier-86c7f7cb9c-fsl5c container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:17 crc kubenswrapper[4897]: I0214 20:00:17.642256 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" podUID="15de099a-88c7-4c7c-9b4e-8d10c1e392f3" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:17 crc kubenswrapper[4897]: I0214 20:00:17.790941 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" 
probeResult="failure" output="command timed out" Feb 14 20:00:17 crc kubenswrapper[4897]: I0214 20:00:17.791134 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:18 crc kubenswrapper[4897]: I0214 20:00:18.059894 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:18 crc kubenswrapper[4897]: I0214 20:00:18.059894 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:18 crc kubenswrapper[4897]: I0214 20:00:18.408283 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" podUID="bd9aef55-ad36-4675-a79a-a1829c9b3b3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:18 crc kubenswrapper[4897]: I0214 20:00:18.451285 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" podUID="afb3d9d3-a3e1-4aac-89ef-a7128579e6e9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 14 20:00:18 crc kubenswrapper[4897]: I0214 20:00:18.935498 4897 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:18 crc kubenswrapper[4897]: I0214 20:00:18.935557 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.168687 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.168748 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.168790 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.168791 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.263278 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.263324 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.263409 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.263414 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.779256 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2bac22b-985e-423c-8765-df9df37cee02" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 20:00:19 crc kubenswrapper[4897]: I0214 20:00:19.779922 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2bac22b-985e-423c-8765-df9df37cee02" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.271185 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.271279 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.271202 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.22:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.271547 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.373745 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.373840 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.373753 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.373898 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.462239 4897 patch_prober.go:28] interesting pod/loki-operator-controller-manager-78d86b9dcc-fgbpn container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.49:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.462305 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" podUID="ab082f7b-c89d-4db4-a04f-e2db844fa022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.49:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.520552 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.520566 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.520618 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.520697 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817345 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817730 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817365 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817800 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" 
containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817474 4897 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-rx2r9 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817836 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" podUID="2fd14f21-0836-40b2-b509-ec296556f45c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817488 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817871 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817501 4897 patch_prober.go:28] interesting 
pod/logging-loki-gateway-c7757d78c-ctkkw container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.817904 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.826872 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.826936 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.826936 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.827016 4897 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.899140 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.899210 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.899243 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.899339 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.920220 4897 patch_prober.go:28] interesting pod/console-7f7fb6d64c-hkskf container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:20 crc kubenswrapper[4897]: I0214 20:00:20.920275 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7f7fb6d64c-hkskf" podUID="e77572d7-6aef-4c6c-bb23-bdb47d9d28ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:20.997317 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:20.997335 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.075389 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-pmlmt" podUID="0b1febb3-dc70-4cd5-9a48-024547405da7" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.115182 4897 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5wxpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.115256 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" podUID="10c2cb4a-c03b-49ca-a6ca-1b5637923932" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.115183 4897 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5wxpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.115389 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" podUID="10c2cb4a-c03b-49ca-a6ca-1b5637923932" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.256262 4897 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe 
status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.257092 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.279192 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-w9dlm" podUID="93aca208-9cef-49a3-917c-2bb7c314d537" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.288261 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-79v5s" podUID="170e914d-6f55-4d61-bb7d-36dae4e4b002" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338238 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" podUID="55ee13ff-72a6-4bdb-8461-fb545f66b881" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338300 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 
container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338370 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338406 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338431 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338467 4897 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338482 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.338619 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" podUID="55ee13ff-72a6-4bdb-8461-fb545f66b881" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.551085 4897 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.551165 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.673974 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-zqcpc" podUID="ac059afa-1f7b-480b-8650-c227c33ba696" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc 
kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.674190 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-vgcv6" podUID="3e2a05b2-5d93-4252-a08b-6b35f225e167" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.674959 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-zqcpc" podUID="ac059afa-1f7b-480b-8650-c227c33ba696" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.676896 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-w9dlm" podUID="93aca208-9cef-49a3-917c-2bb7c314d537" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.676997 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-vgcv6" podUID="3e2a05b2-5d93-4252-a08b-6b35f225e167" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:21 crc kubenswrapper[4897]: I0214 20:00:21.677000 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-79v5s" 
podUID="170e914d-6f55-4d61-bb7d-36dae4e4b002" containerName="registry-server" probeResult="failure" output=< Feb 14 20:00:21 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:00:21 crc kubenswrapper[4897]: > Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.297263 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" podUID="48e0b91f-f946-4ecc-b36c-fc280e728f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.389223 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.389275 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" podUID="fe513351-3f7b-436d-9218-a66a6f579948" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.471254 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" podUID="0128668e-be83-412e-96e6-8c158ab45cc5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.471361 4897 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" podUID="10c98e4f-ae22-481b-992d-6804a1b5d0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.623234 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" podUID="5e11063d-aac7-4fea-91d9-0b560622ccb9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.641133 4897 patch_prober.go:28] interesting pod/thanos-querier-86c7f7cb9c-fsl5c container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.641191 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" podUID="15de099a-88c7-4c7c-9b4e-8d10c1e392f3" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.641138 4897 patch_prober.go:28] interesting pod/thanos-querier-86c7f7cb9c-fsl5c container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 
20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.641240 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" podUID="15de099a-88c7-4c7c-9b4e-8d10c1e392f3" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.688503 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" podUID="8238fbef-1e59-4430-af92-1be3d70c4d84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.764273 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" podUID="7c6ab7c6-c333-41db-ba23-f89b3eff3eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.805312 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" podUID="088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.846245 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" podUID="fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.887441 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" podUID="d2543021-51cc-4cbe-9293-a6e02894e1f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.904084 4897 trace.go:236] Trace[780049080]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (14-Feb-2026 20:00:18.916) (total time: 3982ms): Feb 14 20:00:22 crc kubenswrapper[4897]: Trace[780049080]: [3.982903027s] [3.982903027s] END Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.928208 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" podUID="cd0646ca-c695-4387-ba4b-cc9a3d85b460" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:22 crc kubenswrapper[4897]: I0214 20:00:22.969244 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-m5nfk" podUID="0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:23 crc kubenswrapper[4897]: I0214 20:00:23.060323 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:23 crc kubenswrapper[4897]: I0214 20:00:23.060352 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:23 crc kubenswrapper[4897]: I0214 20:00:23.285264 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-7fnnb" podUID="f8e83507-87e8-44e6-a08d-f1f45f8b4ee0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:23 crc kubenswrapper[4897]: I0214 20:00:23.285283 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" podUID="26f58f32-c15c-49c7-8756-fc2bae972a2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:23 crc kubenswrapper[4897]: I0214 20:00:23.787408 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7cc9d46ffd-mbftl" podUID="1ef9cd33-5ad0-494f-9d50-177eadf0483f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:23 crc kubenswrapper[4897]: I0214 20:00:23.798027 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="289311f5-ac62-4fe6-b260-8bda0a09331b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 
20:00:24 crc kubenswrapper[4897]: I0214 20:00:24.088345 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" podUID="de593d8b-e41e-4a52-bead-28e46be05e4d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:24 crc kubenswrapper[4897]: I0214 20:00:24.088455 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" podUID="de593d8b-e41e-4a52-bead-28e46be05e4d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:24 crc kubenswrapper[4897]: I0214 20:00:24.571275 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" podUID="68eb569a-ca5d-4eef-a936-fd697b26d0be" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.159258 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-mdj4b" podUID="4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.159628 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" podUID="4243feec-23ed-4292-9291-7ad01f7d12a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 
20:00:25.160009 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" podUID="4243feec-23ed-4292-9291-7ad01f7d12a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.160082 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-mdj4b" podUID="4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.160115 4897 patch_prober.go:28] interesting pod/oauth-openshift-868547c79-t4b6c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.160147 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" podUID="f5d97820-5ed5-4374-a152-5097c22fbe8b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.160194 4897 patch_prober.go:28] interesting pod/oauth-openshift-868547c79-t4b6c container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.160219 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" podUID="f5d97820-5ed5-4374-a152-5097c22fbe8b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.429431 4897 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-lx9b2 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.429570 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" podUID="0f4eb68c-7592-4025-a9a0-d5ed85aeec3c" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.555313 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" podUID="7ea0a9e9-940c-4856-8fd0-f19994e3b810" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.678317 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.678500 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n6ptt" podUID="7ea0a9e9-940c-4856-8fd0-f19994e3b810" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.678750 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.679284 4897 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-jw9nh container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.679308 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" podUID="74485545-1349-4cd2-9764-72af83ba9aa1" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.679341 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 
crc kubenswrapper[4897]: I0214 20:00:25.679678 4897 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-zhtld container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.679701 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" podUID="fed2ea1c-038a-40eb-a753-68705d1ae150" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.685121 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-ks77p" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.692531 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"f5267356897e21abb6ac6f691db815dc5386d4bddbd5b8b5c76c31d53c208242"} pod="metallb-system/frr-k8s-ks77p" containerMessage="Container frr failed liveness probe, will be restarted" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.693418 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="frr" containerID="cri-o://f5267356897e21abb6ac6f691db815dc5386d4bddbd5b8b5c76c31d53c208242" gracePeriod=2 Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.805271 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 
20:00:25.808533 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.808594 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.808596 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.808638 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: E0214 20:00:25.808995 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 
20:00:25.827477 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.827483 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.827790 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.827823 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.886342 4897 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9t57n container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.35:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.886414 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podUID="7f9fcba2-5e97-421b-8868-b497df246731" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.35:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.886572 4897 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9t57n container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.35:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.886690 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9t57n" podUID="7f9fcba2-5e97-421b-8868-b497df246731" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.35:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.927415 4897 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-q66h9 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.5:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.927485 4897 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-q66h9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.5:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.927491 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" 
containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.5:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:25 crc kubenswrapper[4897]: I0214 20:00:25.927583 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-q66h9" podUID="b37fa061-9005-4aec-8681-c1107aad5075" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.5:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.061168 4897 patch_prober.go:28] interesting pod/metrics-server-7cfcf6657f-wsnmf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.061508 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podUID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.061574 4897 patch_prober.go:28] interesting pod/metrics-server-7cfcf6657f-wsnmf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.061592 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podUID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerName="metrics-server" probeResult="failure" 
output="Get \"https://10.217.0.79:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.061624 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.079922 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"4a2ab4d17858d582748edaafa439d45f133d6351b1fe7558ddad33188c7b1b13"} pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" containerMessage="Container metrics-server failed liveness probe, will be restarted" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.079987 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" podUID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerName="metrics-server" containerID="cri-o://4a2ab4d17858d582748edaafa439d45f133d6351b1fe7558ddad33188c7b1b13" gracePeriod=170 Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.368862 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-jmbj5" podUID="68eb569a-ca5d-4eef-a936-fd697b26d0be" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.430586 4897 patch_prober.go:28] interesting pod/monitoring-plugin-79d749bcb5-rfm5g container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.430661 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" podUID="959d187e-bbbf-4e61-b0d7-67a6b30529a4" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.430757 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.431257 4897 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-lx9b2 container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.431318 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" podUID="0f4eb68c-7592-4025-a9a0-d5ed85aeec3c" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.555813 4897 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.555898 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="740f1f83-6c75-4e47-a5c5-6a0ef1d40cca" containerName="loki-ingester" probeResult="failure" output="Get 
\"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.565200 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4r6x6" podUID="ae82eac1-c909-47f2-b4b5-2f3f1267345e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.565194 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4r6x6" podUID="ae82eac1-c909-47f2-b4b5-2f3f1267345e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.597914 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.597940 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.598004 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.598006 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.630067 4897 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-jw9nh container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.630136 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" podUID="74485545-1349-4cd2-9764-72af83ba9aa1" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.639238 4897 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.639297 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" 
podUID="9e988817-cbfc-4faf-a31e-bf357c1c4691" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.678264 4897 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-zhtld container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.678355 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-zhtld" podUID="fed2ea1c-038a-40eb-a753-68705d1ae150" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.739926 4897 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.740324 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="62b896b4-5861-4fa8-ac40-642f2d8688b5" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.776824 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/opa 
namespace/openshift-logging: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={ Feb 14 20:00:26 crc kubenswrapper[4897]: "http": "Get \"http://localhost:8082\": context deadline exceeded" Feb 14 20:00:26 crc kubenswrapper[4897]: } Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.776903 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="opa" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.777245 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/opa namespace/openshift-logging: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={ Feb 14 20:00:26 crc kubenswrapper[4897]: "http": "Get \"http://localhost:8082\": context deadline exceeded" Feb 14 20:00:26 crc kubenswrapper[4897]: } Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.777348 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="opa" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.793293 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.793315 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.793487 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.793622 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.795663 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1ae909fc87abca6b70a54edb63d7f2c825f62160862049babf6d8c6c86b0dc8d"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.809260 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.809326 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.827235 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:26 crc kubenswrapper[4897]: I0214 20:00:26.827333 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="gateway" probeResult="failure" 
output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.431498 4897 patch_prober.go:28] interesting pod/monitoring-plugin-79d749bcb5-rfm5g container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.431899 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g" podUID="959d187e-bbbf-4e61-b0d7-67a6b30529a4" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.80:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.441184 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerDied","Data":"f5267356897e21abb6ac6f691db815dc5386d4bddbd5b8b5c76c31d53c208242"} Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.441870 4897 generic.go:334] "Generic (PLEG): container finished" podID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerID="f5267356897e21abb6ac6f691db815dc5386d4bddbd5b8b5c76c31d53c208242" exitCode=143 Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.566931 4897 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-tf6nv container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.567519 4897 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-tf6nv" podUID="c70ba798-8c12-43e8-a0e2-d54617b6bb84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.592284 4897 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-zrmdr container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.592316 4897 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-zrmdr container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.592342 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" podUID="04a49346-5e0b-4511-8879-6d60e76e2464" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.592367 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-zrmdr" podUID="04a49346-5e0b-4511-8879-6d60e76e2464" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.640817 4897 patch_prober.go:28] interesting 
pod/thanos-querier-86c7f7cb9c-fsl5c container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.640888 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" podUID="15de099a-88c7-4c7c-9b4e-8d10c1e392f3" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.780911 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.780986 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.781953 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.782015 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.782062 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" probeResult="failure" output="command timed out" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.789859 4897 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="galera" containerStatusID={"Type":"cri-o","ID":"582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.913689 4897 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tndnf container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.913738 4897 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tndnf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.913760 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" podUID="5c5ace00-d072-440a-bc7b-982b96f636e7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:27 crc kubenswrapper[4897]: I0214 20:00:27.913799 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-tndnf" podUID="5c5ace00-d072-440a-bc7b-982b96f636e7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.060847 4897 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.060953 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.061063 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.450218 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" podUID="bd9aef55-ad36-4675-a79a-a1829c9b3b3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.450246 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-9ht86" podUID="bd9aef55-ad36-4675-a79a-a1829c9b3b3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.460413 4897 generic.go:334] "Generic (PLEG): container finished" podID="a2a15c49-cac6-4772-be07-69fd7597b692" containerID="0698836e5504838594407acba9499d8c3798184b5cfbc432ffa6becfee9c828f" exitCode=1
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.460495 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" event={"ID":"a2a15c49-cac6-4772-be07-69fd7597b692","Type":"ContainerDied","Data":"0698836e5504838594407acba9499d8c3798184b5cfbc432ffa6becfee9c828f"}
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.471622 4897 scope.go:117] "RemoveContainer" containerID="0698836e5504838594407acba9499d8c3798184b5cfbc432ffa6becfee9c828f"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.478356 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ks77p" event={"ID":"1b139a41-dd2e-42ba-a86d-01ade60da46f","Type":"ContainerStarted","Data":"6cc62fa0aab92abb5e8264d424b1125ab6793b188b1f43582bb9900cfb843c15"}
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.533215 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" podUID="afb3d9d3-a3e1-4aac-89ef-a7128579e6e9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.533263 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz" podUID="afb3d9d3-a3e1-4aac-89ef-a7128579e6e9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.533325 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.781857 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-d5lnt" podUID="ff7e179e-a00c-436b-bf50-c14810288beb" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.785944 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="289311f5-ac62-4fe6-b260-8bda0a09331b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.935280 4897 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.935412 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:28 crc kubenswrapper[4897]: I0214 20:00:28.935564 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.167489 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.167552 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.167596 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.168846 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.168922 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.173877 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"b5a7994574aca1091156dc54e21e19937c01fd33af545851e0560dafb8bc8803"} pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" containerMessage="Container controller-manager failed liveness probe, will be restarted"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.173930 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" containerID="cri-o://b5a7994574aca1091156dc54e21e19937c01fd33af545851e0560dafb8bc8803" gracePeriod=30
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.181960 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.182010 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.182149 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.182211 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.187992 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"d4a5ac44915f8d2ec150972798e573aedc58e447d8751f83be105c48b10327a2"} pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.188070 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" containerID="cri-o://d4a5ac44915f8d2ec150972798e573aedc58e447d8751f83be105c48b10327a2" gracePeriod=30
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.182075 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.273311 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csghqz"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.483292 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ks77p"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.492259 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws" event={"ID":"a2a15c49-cac6-4772-be07-69fd7597b692","Type":"ContainerStarted","Data":"b85d8fdf064aa5ca698a7b79a43b4d359a1263fd9db2d1ba956cbf6902c6facf"}
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.493489 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.679218 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.679236 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.679318 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.679375 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.783101 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2bac22b-985e-423c-8765-df9df37cee02" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.783571 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d2bac22b-985e-423c-8765-df9df37cee02" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 20:00:29 crc kubenswrapper[4897]: I0214 20:00:29.886880 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.116299 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="48ec6bd3-236f-4982-8dfa-e5c72c4d67bc" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.16:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.116473 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="48ec6bd3-236f-4982-8dfa-e5c72c4d67bc" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.16:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.271556 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.271640 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.271702 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.271888 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.271969 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.272151 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.273089 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"7b3a5727ae9bc5b3d107f5c86405f5bcda06d06037b3d03a97b080d98c8fa2ce"} pod="openshift-console-operator/console-operator-58897d9998-62b7q" containerMessage="Container console-operator failed liveness probe, will be restarted"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.273156 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" containerID="cri-o://7b3a5727ae9bc5b3d107f5c86405f5bcda06d06037b3d03a97b080d98c8fa2ce" gracePeriod=30
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.402221 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.402286 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.402342 4897 patch_prober.go:28] interesting pod/downloads-7954f5f757-9kvql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.402358 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9kvql" podUID="7ec1f803-3889-4483-87ae-9a38bd020818" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.463862 4897 patch_prober.go:28] interesting pod/loki-operator-controller-manager-78d86b9dcc-fgbpn container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.49:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.463941 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" podUID="ab082f7b-c89d-4db4-a04f-e2db844fa022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.49:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.464008 4897 patch_prober.go:28] interesting pod/loki-operator-controller-manager-78d86b9dcc-fgbpn container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.49:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.464043 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-78d86b9dcc-fgbpn" podUID="ab082f7b-c89d-4db4-a04f-e2db844fa022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.49:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525203 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525263 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525278 4897 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-ks77p" podUID="1b139a41-dd2e-42ba-a86d-01ade60da46f" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525311 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525290 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525385 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.525454 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.526277 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"a96ee81cf06e3ed1c601b628c75da40c6ce9217d6b0638b32f9a1988b12d5537"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" containerMessage="Container catalog-operator failed liveness probe, will be restarted"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.526323 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" containerID="cri-o://a96ee81cf06e3ed1c601b628c75da40c6ce9217d6b0638b32f9a1988b12d5537" gracePeriod=30
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.796873 4897 trace.go:236] Trace[920482322]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (14-Feb-2026 20:00:27.641) (total time: 3153ms):
Feb 14 20:00:30 crc kubenswrapper[4897]: Trace[920482322]: [3.153360939s] [3.153360939s] END
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.796875 4897 trace.go:236] Trace[1670675025]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (14-Feb-2026 20:00:25.745) (total time: 5048ms):
Feb 14 20:00:30 crc kubenswrapper[4897]: Trace[1670675025]: [5.048771615s] [5.048771615s] END
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.796875 4897 trace.go:236] Trace[1900344008]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (14-Feb-2026 20:00:27.246) (total time: 3548ms):
Feb 14 20:00:30 crc kubenswrapper[4897]: Trace[1900344008]: [3.548018375s] [3.548018375s] END
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.809524 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.809592 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815250 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815278 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815310 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815320 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815347 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815335 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815286 4897 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-rx2r9 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815408 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815575 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815756 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" podUID="2fd14f21-0836-40b2-b509-ec296556f45c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.815816 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.816756 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d933226f4d63a07e451f8a378c978db1eca0e13a3e5220d9f4b91a1a76177239"} pod="openshift-ingress/router-default-5444994796-c5z8g" containerMessage="Container router failed liveness probe, will be restarted"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.816814 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" containerID="cri-o://d933226f4d63a07e451f8a378c978db1eca0e13a3e5220d9f4b91a1a76177239" gracePeriod=10
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.818218 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"fddf108cd303253b44fc2052b2e20b9f244304238688e02c64c1121f26c775ce"} pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.818269 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" podUID="2fd14f21-0836-40b2-b509-ec296556f45c" containerName="authentication-operator" containerID="cri-o://fddf108cd303253b44fc2052b2e20b9f244304238688e02c64c1121f26c775ce" gracePeriod=30
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.827042 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.827201 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.866188 4897 trace.go:236] Trace[1727778487]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (14-Feb-2026 20:00:27.749) (total time: 3117ms):
Feb 14 20:00:30 crc kubenswrapper[4897]: Trace[1727778487]: [3.117048992s] [3.117048992s] END
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.898613 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.898682 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.898711 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.898774 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.898787 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.898853 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.899984 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"75fceb5d5e8fc027787b7299a8a4d700095bfcd2971ba6e358969b48557bcc33"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" containerMessage="Container packageserver failed liveness probe, will be restarted"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.900051 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" containerID="cri-o://75fceb5d5e8fc027787b7299a8a4d700095bfcd2971ba6e358969b48557bcc33" gracePeriod=30
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.919453 4897 patch_prober.go:28] interesting pod/console-7f7fb6d64c-hkskf container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.919505 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7f7fb6d64c-hkskf" podUID="e77572d7-6aef-4c6c-bb23-bdb47d9d28ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:30 crc kubenswrapper[4897]: I0214 20:00:30.919586 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f7fb6d64c-hkskf"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.061410 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3c77ebc2-8dc3-4b0f-8f95-b3208b853935" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.114568 4897 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5wxpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.114601 4897 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5wxpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.114646 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" podUID="10c2cb4a-c03b-49ca-a6ca-1b5637923932" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.114700 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5wxpc" podUID="10c2cb4a-c03b-49ca-a6ca-1b5637923932" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302203 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" podUID="55ee13ff-72a6-4bdb-8461-fb545f66b881" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302267 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302679 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302729 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302289 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302816 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302808 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302300 4897 patch_prober.go:28] interesting
pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302895 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302917 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302350 4897 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.302971 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.303271 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 
14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.303294 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.304498 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"4470c0c809cae7d32f693d080d3b0047bfdb6b608e81a20d43877a4bdc32e360"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.304547 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" containerID="cri-o://4470c0c809cae7d32f693d080d3b0047bfdb6b608e81a20d43877a4bdc32e360" gracePeriod=30 Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.304734 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"3380404fc0cd5c09902e963dbc200baed8bc7182fbd34afb88a9d5a09d0fc3b2"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.304779 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" containerID="cri-o://3380404fc0cd5c09902e963dbc200baed8bc7182fbd34afb88a9d5a09d0fc3b2" gracePeriod=30 Feb 14 20:00:31 crc 
kubenswrapper[4897]: I0214 20:00:31.471140 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.784503 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-zqcpc" podUID="ac059afa-1f7b-480b-8650-c227c33ba696" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.784591 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-zqcpc" podUID="ac059afa-1f7b-480b-8650-c227c33ba696" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.784551 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.784713 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-bdg8n" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.785056 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.785192 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bdg8n" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.789442 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" 
containerStatusID={"Type":"cri-o","ID":"20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3"} pod="openstack-operators/openstack-operator-index-bdg8n" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.789509 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server" containerID="cri-o://20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3" gracePeriod=30 Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.857812 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.857877 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.899524 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.899871 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" 
podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.920074 4897 patch_prober.go:28] interesting pod/console-7f7fb6d64c-hkskf container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:31 crc kubenswrapper[4897]: I0214 20:00:31.920137 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7f7fb6d64c-hkskf" podUID="e77572d7-6aef-4c6c-bb23-bdb47d9d28ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.338214 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" podUID="48e0b91f-f946-4ecc-b36c-fc280e728f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.338232 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2lvr5" podUID="48e0b91f-f946-4ecc-b36c-fc280e728f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.379195 4897 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.379215 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh" podUID="55ee13ff-72a6-4bdb-8461-fb545f66b881" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.379261 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.532405 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-62b7q_0cd062a1-246d-4ad6-b81a-a9f103576a32/console-operator/0.log" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.532763 4897 generic.go:334] "Generic (PLEG): container finished" podID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerID="7b3a5727ae9bc5b3d107f5c86405f5bcda06d06037b3d03a97b080d98c8fa2ce" exitCode=1 Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.532886 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-62b7q" 
event={"ID":"0cd062a1-246d-4ad6-b81a-a9f103576a32","Type":"ContainerDied","Data":"7b3a5727ae9bc5b3d107f5c86405f5bcda06d06037b3d03a97b080d98c8fa2ce"} Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.536845 4897 generic.go:334] "Generic (PLEG): container finished" podID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerID="a96ee81cf06e3ed1c601b628c75da40c6ce9217d6b0638b32f9a1988b12d5537" exitCode=0 Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.536893 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" event={"ID":"faa970d9-b5d7-49a1-b162-2bed0f528b71","Type":"ContainerDied","Data":"a96ee81cf06e3ed1c601b628c75da40c6ce9217d6b0638b32f9a1988b12d5537"} Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.640379 4897 patch_prober.go:28] interesting pod/thanos-querier-86c7f7cb9c-fsl5c container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.640462 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-86c7f7cb9c-fsl5c" podUID="15de099a-88c7-4c7c-9b4e-8d10c1e392f3" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.77:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.752257 4897 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gvc49 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.752309 4897 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" podUID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.752255 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" podUID="0128668e-be83-412e-96e6-8c158ab45cc5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.783547 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-vgcv6" podUID="3e2a05b2-5d93-4252-a08b-6b35f225e167" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.784254 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-vgcv6" podUID="3e2a05b2-5d93-4252-a08b-6b35f225e167" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.834201 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.834243 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" 
podUID="de1e8e22-10a4-4d2a-855f-4c7bb6a49096" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.834300 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" podUID="fe513351-3f7b-436d-9218-a66a6f579948" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.834331 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" podUID="8dffc7df-2563-4f02-8dfc-83ab824af909" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.834451 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.916196 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-drm7d" podUID="fe513351-3f7b-436d-9218-a66a6f579948" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:32 crc kubenswrapper[4897]: I0214 20:00:32.916204 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" podUID="5e11063d-aac7-4fea-91d9-0b560622ccb9" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:32.998217 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:32.998226 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" podUID="10c98e4f-ae22-481b-992d-6804a1b5d0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:32.998266 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:32.998213 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5v2tq" podUID="10c98e4f-ae22-481b-992d-6804a1b5d0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:32.999096 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 20:00:33 crc 
kubenswrapper[4897]: I0214 20:00:33.000471 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"b4158e9aae62651f009339a55ec07df80d0c733231921cf08d84055037eca4bf"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.000611 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" containerID="cri-o://b4158e9aae62651f009339a55ec07df80d0c733231921cf08d84055037eca4bf" gracePeriod=30 Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.080530 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wsghb" podUID="0128668e-be83-412e-96e6-8c158ab45cc5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.080853 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-5dg28" podUID="6fe73ade-8031-493c-9628-018ad436c7a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.164209 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" podUID="7c6ab7c6-c333-41db-ba23-f89b3eff3eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.247249 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" podUID="088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.329246 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" podUID="fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.329345 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-tsqnc" podUID="de1e8e22-10a4-4d2a-855f-4c7bb6a49096" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.390878 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" podUID="d2543021-51cc-4cbe-9293-a6e02894e1f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.432176 4897 trace.go:236] Trace[158326497]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (14-Feb-2026 20:00:31.706) (total time: 1725ms): Feb 14 20:00:33 crc 
kubenswrapper[4897]: Trace[158326497]: [1.725901312s] [1.725901312s] END Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.474663 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" podUID="cd0646ca-c695-4387-ba4b-cc9a3d85b460" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.475818 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nwjnd" podUID="5e11063d-aac7-4fea-91d9-0b560622ccb9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.476328 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.476358 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.476413 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 
20:00:33.477017 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" podUID="8238fbef-1e59-4430-af92-1be3d70c4d84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.477077 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rtvvf" podUID="8238fbef-1e59-4430-af92-1be3d70c4d84" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.477111 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bl5g8" podUID="7c6ab7c6-c333-41db-ba23-f89b3eff3eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.477291 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-58f847fcbd-9djqq" podUID="949ed147-ec0c-4e17-bc34-4d27018a9567" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.478070 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gvcdc" podUID="088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.554048 4897 generic.go:334] "Generic (PLEG): container finished" podID="7325d839-07ed-4966-bb45-10719d4ec580" containerID="b5a7994574aca1091156dc54e21e19937c01fd33af545851e0560dafb8bc8803" exitCode=0
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.554098 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" event={"ID":"7325d839-07ed-4966-bb45-10719d4ec580","Type":"ContainerDied","Data":"b5a7994574aca1091156dc54e21e19937c01fd33af545851e0560dafb8bc8803"}
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.562205 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-qbz5t" podUID="fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.562229 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" podUID="26f58f32-c15c-49c7-8756-fc2bae972a2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.562346 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bh95f" podUID="d2543021-51cc-4cbe-9293-a6e02894e1f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.562472 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-gfrd9" podUID="cd0646ca-c695-4387-ba4b-cc9a3d85b460" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.562930 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vv2k7" podUID="26f58f32-c15c-49c7-8756-fc2bae972a2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.774524 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-ts22t"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.785440 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-79v5s" podUID="170e914d-6f55-4d61-bb7d-36dae4e4b002" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.785768 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-79v5s" podUID="170e914d-6f55-4d61-bb7d-36dae4e4b002" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.790109 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="289311f5-ac62-4fe6-b260-8bda0a09331b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.790187 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.791884 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"4f46f557febbe5e70794375605b160aeb9b01adc4005b88dc9ae36489b9cb612"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.792021 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="289311f5-ac62-4fe6-b260-8bda0a09331b" containerName="ceilometer-central-agent" containerID="cri-o://4f46f557febbe5e70794375605b160aeb9b01adc4005b88dc9ae36489b9cb612" gracePeriod=30
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.793619 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-w9dlm" podUID="93aca208-9cef-49a3-917c-2bb7c314d537" containerName="registry-server" probeResult="failure" output=<
Feb 14 20:00:33 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s
Feb 14 20:00:33 crc kubenswrapper[4897]: >
Feb 14 20:00:33 crc kubenswrapper[4897]: I0214 20:00:33.793718 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-w9dlm" podUID="93aca208-9cef-49a3-917c-2bb7c314d537" containerName="registry-server" probeResult="failure" output=<
Feb 14 20:00:33 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s
Feb 14 20:00:33 crc kubenswrapper[4897]: >
Feb 14 20:00:34 crc kubenswrapper[4897]: E0214 20:00:34.072940 4897 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e892adf_50be_43db_bfb6_6ad0530bf7a5.slice/crio-conmon-d4a5ac44915f8d2ec150972798e573aedc58e447d8751f83be105c48b10327a2.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.092018 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" podUID="de593d8b-e41e-4a52-bead-28e46be05e4d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.092163 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-c8d485b4-vdmjx" podUID="de593d8b-e41e-4a52-bead-28e46be05e4d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.565441 4897 generic.go:334] "Generic (PLEG): container finished" podID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerID="3380404fc0cd5c09902e963dbc200baed8bc7182fbd34afb88a9d5a09d0fc3b2" exitCode=0
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.565776 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" event={"ID":"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e","Type":"ContainerDied","Data":"3380404fc0cd5c09902e963dbc200baed8bc7182fbd34afb88a9d5a09d0fc3b2"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.565811 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" event={"ID":"4dab2db8-b8bf-4421-a71e-fb52c69e8a8e","Type":"ContainerStarted","Data":"2d54a6187666c3c1ae6e0a5f77bc68fef18b3526b376ef42fee11b2a87c7e198"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.566495 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.566653 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body=
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.566686 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.568988 4897 generic.go:334] "Generic (PLEG): container finished" podID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerID="d4a5ac44915f8d2ec150972798e573aedc58e447d8751f83be105c48b10327a2" exitCode=0
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.569097 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" event={"ID":"7e892adf-50be-43db-bfb6-6ad0530bf7a5","Type":"ContainerDied","Data":"d4a5ac44915f8d2ec150972798e573aedc58e447d8751f83be105c48b10327a2"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.590174 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-62b7q_0cd062a1-246d-4ad6-b81a-a9f103576a32/console-operator/0.log"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.590630 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-62b7q" event={"ID":"0cd062a1-246d-4ad6-b81a-a9f103576a32","Type":"ContainerStarted","Data":"86b62f264bf1a084f2bd7850bb4d15de032afcbb1616f5a19ac3d1fdcba606ed"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.590905 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.591407 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.591453 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.594586 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" event={"ID":"faa970d9-b5d7-49a1-b162-2bed0f528b71","Type":"ContainerStarted","Data":"1db62a908c8960b2a44b675aaa89c0d854c5ca925c8459d22449021947d80cb4"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.594800 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.595168 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.595217 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.597694 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.598072 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.599195 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" event={"ID":"7325d839-07ed-4966-bb45-10719d4ec580","Type":"ContainerStarted","Data":"fba3087ec7b40649098c2035ff2bccad9ff5c27686d80f89f8385e2d905012a1"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.599362 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.600045 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.600365 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.602437 4897 generic.go:334] "Generic (PLEG): container finished" podID="2fd14f21-0836-40b2-b509-ec296556f45c" containerID="fddf108cd303253b44fc2052b2e20b9f244304238688e02c64c1121f26c775ce" exitCode=0
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.602487 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" event={"ID":"2fd14f21-0836-40b2-b509-ec296556f45c","Type":"ContainerDied","Data":"fddf108cd303253b44fc2052b2e20b9f244304238688e02c64c1121f26c775ce"}
Feb 14 20:00:34 crc kubenswrapper[4897]: I0214 20:00:34.641061 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ks77p"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.082437 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-mdj4b" podUID="4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.124431 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-mdj4b" podUID="4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.125193 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-778945c4f9-cbw2h" podUID="4243feec-23ed-4292-9291-7ad01f7d12a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.177277 4897 patch_prober.go:28] interesting pod/oauth-openshift-868547c79-t4b6c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.177330 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" podUID="f5d97820-5ed5-4374-a152-5097c22fbe8b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.219679 4897 patch_prober.go:28] interesting pod/oauth-openshift-868547c79-t4b6c container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.219734 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-868547c79-t4b6c" podUID="f5d97820-5ed5-4374-a152-5097c22fbe8b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.428988 4897 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-lx9b2 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.429376 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lx9b2" podUID="0f4eb68c-7592-4025-a9a0-d5ed85aeec3c" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.619260 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rx2r9" event={"ID":"2fd14f21-0836-40b2-b509-ec296556f45c","Type":"ContainerStarted","Data":"24d9e55f1c67129335ea63d74b7767d7c3663635419e20a5c41e086e28d5d692"}
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.621859 4897 generic.go:334] "Generic (PLEG): container finished" podID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerID="75fceb5d5e8fc027787b7299a8a4d700095bfcd2971ba6e358969b48557bcc33" exitCode=0
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.621913 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" event={"ID":"0f237a59-0e7e-4ae0-94c9-c6d451224a27","Type":"ContainerDied","Data":"75fceb5d5e8fc027787b7299a8a4d700095bfcd2971ba6e358969b48557bcc33"}
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.623732 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" event={"ID":"7e892adf-50be-43db-bfb6-6ad0530bf7a5","Type":"ContainerStarted","Data":"7a330175afdc406c52d5395dcea63c7865c19c26408f09a86847df417d613031"}
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.627101 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.629631 4897 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-jw9nh container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.629692 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.629720 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jw9nh" podUID="74485545-1349-4cd2-9764-72af83ba9aa1" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.629727 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.631582 4897 generic.go:334] "Generic (PLEG): container finished" podID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerID="20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3" exitCode=0
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.631642 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bdg8n" event={"ID":"afb2923f-489f-4ce0-bd55-f95a6c59f809","Type":"ContainerDied","Data":"20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3"}
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.636803 4897 generic.go:334] "Generic (PLEG): container finished" podID="3b9a689e-54e3-48df-a102-500878c35aa2" containerID="b4158e9aae62651f009339a55ec07df80d0c733231921cf08d84055037eca4bf" exitCode=0
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.636863 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" event={"ID":"3b9a689e-54e3-48df-a102-500878c35aa2","Type":"ContainerDied","Data":"b4158e9aae62651f009339a55ec07df80d0c733231921cf08d84055037eca4bf"}
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.636909 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" event={"ID":"3b9a689e-54e3-48df-a102-500878c35aa2","Type":"ContainerStarted","Data":"a554c9473e8884fbfd426f0dcf00789baf498780354a75cca190f772e1e5fd0e"}
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.637777 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639318 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639353 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639354 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639410 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639407 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639425 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639487 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.639500 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.785362 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndtpt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.785707 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" podUID="c87321f8-a781-4a08-93e8-2280f2ee57b8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.785386 4897 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndtpt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.785828 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ndtpt" podUID="c87321f8-a781-4a08-93e8-2280f2ee57b8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.809410 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-ctkkw container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.809460 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-ctkkw" podUID="969ba5ce-9b29-41f2-ba75-76f548daa534" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.827896 4897 patch_prober.go:28] interesting pod/logging-loki-gateway-c7757d78c-fb7zn container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.827966 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-c7757d78c-fb7zn" podUID="cec4c0da-107d-4f6d-946d-2ffe925883e4" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.902884 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera" containerID="cri-o://582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838" gracePeriod=22
Feb 14 20:00:35 crc kubenswrapper[4897]: I0214 20:00:35.942192 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerName="galera" containerID="cri-o://1ae909fc87abca6b70a54edb63d7f2c825f62160862049babf6d8c6c86b0dc8d" gracePeriod=21
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.052313 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3 is running failed: container process not found" containerID="20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3" cmd=["grpc_health_probe","-addr=:50051"]
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.052776 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3 is running failed: container process not found" containerID="20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3" cmd=["grpc_health_probe","-addr=:50051"]
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.053049 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3 is running failed: container process not found" containerID="20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3" cmd=["grpc_health_probe","-addr=:50051"]
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.053088 4897 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20cd88d7ef7068626c30ed5a8d5449d741b985e090e71376fa7e9b492a6417a3 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-bdg8n" podUID="afb2923f-489f-4ce0-bd55-f95a6c59f809" containerName="registry-server"
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.077558 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-79d749bcb5-rfm5g"
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.275671 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.483856 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.486347 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.488121 4897 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Feb 14 20:00:36 crc kubenswrapper[4897]: E0214 20:00:36.488208 4897 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerName="galera"
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.649971 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" event={"ID":"0f237a59-0e7e-4ae0-94c9-c6d451224a27","Type":"ContainerStarted","Data":"335d1ac8e7db1a4d9a4f0a89a78113c6eec036ae18749e56aefbc0fda29d41fe"}
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.650219 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.650479 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.650568 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.654149 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bdg8n" event={"ID":"afb2923f-489f-4ce0-bd55-f95a6c59f809","Type":"ContainerStarted","Data":"fea82bb59d58a6553837304eabd24c7af0e80387beab6a535b182f4e8b0921ac"}
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.656083 4897 generic.go:334] "Generic (PLEG): container finished" podID="0d570fe1-d9f5-4d80-baf9-17877fd99929" containerID="056ce3e4fa6621e1adf777f475e35f601c9bf56e2e3c06a4812ca4a87b199ab1" exitCode=0
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.656114 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" event={"ID":"0d570fe1-d9f5-4d80-baf9-17877fd99929","Type":"ContainerDied","Data":"056ce3e4fa6621e1adf777f475e35f601c9bf56e2e3c06a4812ca4a87b199ab1"}
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.658612 4897 generic.go:334] "Generic (PLEG): container finished" podID="15fa65ae-a663-434d-9d2d-2a69a3f7d81c" containerID="4470c0c809cae7d32f693d080d3b0047bfdb6b608e81a20d43877a4bdc32e360" exitCode=0
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.658636 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" event={"ID":"15fa65ae-a663-434d-9d2d-2a69a3f7d81c","Type":"ContainerDied","Data":"4470c0c809cae7d32f693d080d3b0047bfdb6b608e81a20d43877a4bdc32e360"}
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.659173 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Feb 14 20:00:36 crc kubenswrapper[4897]: I0214 20:00:36.659221 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.582963 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e95d0e1a-6046-4ec7-8422-0858aca3bca9" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.680124 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" event={"ID":"15fa65ae-a663-434d-9d2d-2a69a3f7d81c","Type":"ContainerStarted","Data":"de80505380e8171b625f19f283ee2167f26a5e5857e435b35c601ce1207fdaf4"}
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.680848 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.680891 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.680925 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49"
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.680941 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.680994 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Feb 14 20:00:37 crc kubenswrapper[4897]: I0214 20:00:37.813930 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d"
Feb 14 20:00:37 crc kubenswrapper[4897]: E0214 20:00:37.816785 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:00:38 crc kubenswrapper[4897]: I0214 20:00:38.168309 4897 patch_prober.go:28] interesting pod/controller-manager-86b69bbd49-9rnzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Feb 14 20:00:38 crc kubenswrapper[4897]: I0214 20:00:38.168692 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb" podUID="7325d839-07ed-4966-bb45-10719d4ec580" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Feb 14 20:00:38 crc kubenswrapper[4897]: I0214 20:00:38.181070 4897 patch_prober.go:28] interesting pod/route-controller-manager-66464749f5-tftwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Feb 14 20:00:38 crc kubenswrapper[4897]: I0214 20:00:38.181128 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf" podUID="7e892adf-50be-43db-bfb6-6ad0530bf7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.239080 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e95d0e1a-6046-4ec7-8422-0858aca3bca9" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.270731 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.270804 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.270736 4897 patch_prober.go:28] interesting pod/console-operator-58897d9998-62b7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.270901 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-62b7q" podUID="0cd062a1-246d-4ad6-b81a-a9f103576a32" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.521293 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.521332 4897 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jh8w7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.521346 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.521391 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7" podUID="faa970d9-b5d7-49a1-b162-2bed0f528b71" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.702992 4897 generic.go:334] "Generic (PLEG): container finished" podID="289311f5-ac62-4fe6-b260-8bda0a09331b" containerID="4f46f557febbe5e70794375605b160aeb9b01adc4005b88dc9ae36489b9cb612" exitCode=0
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.703071 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerDied","Data":"4f46f557febbe5e70794375605b160aeb9b01adc4005b88dc9ae36489b9cb612"}
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.703455 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"289311f5-ac62-4fe6-b260-8bda0a09331b","Type":"ContainerStarted","Data":"980285f2f371a8cc9825923ea1629ed0573a98bbf8eefd6df01c0ff225afe771"}
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.707740 4897 generic.go:334] "Generic (PLEG): container finished" podID="fdda6cd9-a603-4bb0-8595-3d128fc9e324" containerID="582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838" exitCode=0
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.707773 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fdda6cd9-a603-4bb0-8595-3d128fc9e324","Type":"ContainerDied","Data":"582f26b3a97ae333b48f26dba8219d84d182c93c5c493e55ab1ff1f207357838"}
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.707805 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fdda6cd9-a603-4bb0-8595-3d128fc9e324","Type":"ContainerStarted","Data":"1ace6d50bfea60a96ac1a6f03f188a561edabdd7b70a9801bcb6d2137f26b442"}
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.899392 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.899453 4897 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9pw99 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.899500 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.899450 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99" podUID="0f237a59-0e7e-4ae0-94c9-c6d451224a27" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.986191 4897 patch_prober.go:28] interesting pod/router-default-5444994796-c5z8g container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok
Feb 14 20:00:39 crc kubenswrapper[4897]: [+]has-synced ok
Feb 14 20:00:39 crc kubenswrapper[4897]: [-]process-running failed: reason withheld
Feb 14 20:00:39 crc kubenswrapper[4897]: healthz check failed
Feb 14 20:00:39 crc kubenswrapper[4897]: I0214 20:00:39.986248 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c5z8g" podUID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.063401 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f7fb6d64c-hkskf"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.199580 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-99cb98555-5nrbh"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.204467 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body=
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.204507 4897 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-tllh7 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused" start-of-body=
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.204523 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.204554 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7" podUID="4dab2db8-b8bf-4421-a71e-fb52c69e8a8e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.72:8443/healthz\": dial tcp 10.217.0.72:8443: connect: connection refused"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.597403 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.597421 4897 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-klcwn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.597458 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.597482 4897 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn" podUID="3b9a689e-54e3-48df-a102-500878c35aa2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.723997 4897 generic.go:334] "Generic (PLEG): container finished" podID="9a8b3d12-d5db-435a-ba48-fbe1e31fef96" containerID="1ae909fc87abca6b70a54edb63d7f2c825f62160862049babf6d8c6c86b0dc8d" exitCode=0
Feb 14 20:00:40 crc kubenswrapper[4897]: I0214 20:00:40.724062 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9a8b3d12-d5db-435a-ba48-fbe1e31fef96","Type":"ContainerDied","Data":"1ae909fc87abca6b70a54edb63d7f2c825f62160862049babf6d8c6c86b0dc8d"}
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.495424 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fzgws"
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.751259 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-c5z8g_9c34acbe-6a2d-446a-b2e2-5fc5a4130deb/router/0.log"
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.751342 4897 generic.go:334] "Generic (PLEG): container finished" podID="9c34acbe-6a2d-446a-b2e2-5fc5a4130deb" containerID="d933226f4d63a07e451f8a378c978db1eca0e13a3e5220d9f4b91a1a76177239" exitCode=137
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.753850 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c5z8g" event={"ID":"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb","Type":"ContainerDied","Data":"d933226f4d63a07e451f8a378c978db1eca0e13a3e5220d9f4b91a1a76177239"}
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.753914 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c5z8g" event={"ID":"9c34acbe-6a2d-446a-b2e2-5fc5a4130deb","Type":"ContainerStarted","Data":"41abd9525d08d0a5c3fc8e3876c0b5a0ff77e6f281f7c1d8c5fb733dab506190"}
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.790376 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9a8b3d12-d5db-435a-ba48-fbe1e31fef96","Type":"ContainerStarted","Data":"ec03185e9abc36673d823b0faa85f63ea4397f5a402bcb1343701127a56d7946"}
Feb 14 20:00:41 crc kubenswrapper[4897]: I0214 20:00:41.877368 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.062640 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdltb\" (UniqueName: \"kubernetes.io/projected/0d570fe1-d9f5-4d80-baf9-17877fd99929-kube-api-access-fdltb\") pod \"0d570fe1-d9f5-4d80-baf9-17877fd99929\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") "
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.063067 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d570fe1-d9f5-4d80-baf9-17877fd99929-config-volume\") pod \"0d570fe1-d9f5-4d80-baf9-17877fd99929\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") "
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.063365 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d570fe1-d9f5-4d80-baf9-17877fd99929-secret-volume\") pod \"0d570fe1-d9f5-4d80-baf9-17877fd99929\" (UID: \"0d570fe1-d9f5-4d80-baf9-17877fd99929\") "
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.064725 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d570fe1-d9f5-4d80-baf9-17877fd99929-config-volume" (OuterVolumeSpecName: "config-volume") pod "0d570fe1-d9f5-4d80-baf9-17877fd99929" (UID: "0d570fe1-d9f5-4d80-baf9-17877fd99929"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.115490 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d570fe1-d9f5-4d80-baf9-17877fd99929-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0d570fe1-d9f5-4d80-baf9-17877fd99929" (UID: "0d570fe1-d9f5-4d80-baf9-17877fd99929"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.116112 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d570fe1-d9f5-4d80-baf9-17877fd99929-kube-api-access-fdltb" (OuterVolumeSpecName: "kube-api-access-fdltb") pod "0d570fe1-d9f5-4d80-baf9-17877fd99929" (UID: "0d570fe1-d9f5-4d80-baf9-17877fd99929"). InnerVolumeSpecName "kube-api-access-fdltb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.166553 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d570fe1-d9f5-4d80-baf9-17877fd99929-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.166592 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdltb\" (UniqueName: \"kubernetes.io/projected/0d570fe1-d9f5-4d80-baf9-17877fd99929-kube-api-access-fdltb\") on node \"crc\" DevicePath \"\""
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.166601 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d570fe1-d9f5-4d80-baf9-17877fd99929-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.228868 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e95d0e1a-6046-4ec7-8422-0858aca3bca9" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.228956 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.229856 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"cadf3a9715958640ff0d8bd4da07bbf6f34cde6693a386925b55064de2993c3b"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.229903 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e95d0e1a-6046-4ec7-8422-0858aca3bca9" containerName="cinder-scheduler" containerID="cri-o://cadf3a9715958640ff0d8bd4da07bbf6f34cde6693a386925b55064de2993c3b" gracePeriod=30
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.730268 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.736657 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.804078 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr" event={"ID":"0d570fe1-d9f5-4d80-baf9-17877fd99929","Type":"ContainerDied","Data":"799a2cf02c5d80661b3e36a24b3a6ad2011c0e3eda62510ce76fe43281cedfd3"}
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.804286 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518320-2tbwr"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.805741 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="799a2cf02c5d80661b3e36a24b3a6ad2011c0e3eda62510ce76fe43281cedfd3"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.805776 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.807672 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-c5z8g"
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.973103 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn"]
Feb 14 20:00:42 crc kubenswrapper[4897]: I0214 20:00:42.990319 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518275-96ngn"]
Feb 14 20:00:43 crc kubenswrapper[4897]: I0214 20:00:43.601373 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-klcwn"
Feb 14 20:00:43 crc kubenswrapper[4897]: I0214 20:00:43.808657 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38610d34-ba7c-44fe-b975-6a8218c6937c" path="/var/lib/kubelet/pods/38610d34-ba7c-44fe-b975-6a8218c6937c/volumes"
Feb 14 20:00:44 crc kubenswrapper[4897]: I0214 20:00:44.062043 4897 scope.go:117] "RemoveContainer" containerID="926fe056f2b2bd17d840f888e0e3c737732eaa76736dbf33d0f5c4ce42ccfeed"
Feb 14 20:00:45 crc kubenswrapper[4897]: I0214 20:00:45.301386 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 14 20:00:45 crc kubenswrapper[4897]: I0214 20:00:45.301647 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.048880 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.049170 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.181779 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.463869 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.464287 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.565562 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.872251 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bdg8n"
Feb 14 20:00:46 crc kubenswrapper[4897]: I0214 20:00:46.935328 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 14 20:00:47 crc kubenswrapper[4897]: I0214 20:00:47.668648 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 14 20:00:47 crc kubenswrapper[4897]: I0214 20:00:47.761064 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 14 20:00:48 crc kubenswrapper[4897]: I0214 20:00:48.172764 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86b69bbd49-9rnzb"
Feb 14 20:00:48 crc kubenswrapper[4897]: I0214 20:00:48.189165 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66464749f5-tftwf"
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.276921 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-62b7q"
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.526016 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jh8w7"
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.868914 4897 generic.go:334] "Generic (PLEG): container finished" podID="1ccac56d-8e29-4241-99ef-bb65d3ff373f" containerID="8a1b545ff788f34630c4b7e32a6ca1975abf41bf1e0380280f254144e849184b" exitCode=1
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.869007 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccac56d-8e29-4241-99ef-bb65d3ff373f","Type":"ContainerDied","Data":"8a1b545ff788f34630c4b7e32a6ca1975abf41bf1e0380280f254144e849184b"}
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.873937 4897 generic.go:334] "Generic (PLEG): container finished" podID="e95d0e1a-6046-4ec7-8422-0858aca3bca9" containerID="cadf3a9715958640ff0d8bd4da07bbf6f34cde6693a386925b55064de2993c3b" exitCode=0
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.873980 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e95d0e1a-6046-4ec7-8422-0858aca3bca9","Type":"ContainerDied","Data":"cadf3a9715958640ff0d8bd4da07bbf6f34cde6693a386925b55064de2993c3b"}
Feb 14 20:00:49 crc kubenswrapper[4897]: I0214 20:00:49.901785 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9pw99"
Feb 14 20:00:50 crc kubenswrapper[4897]: I0214 20:00:50.210931 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tllh7"
Feb 14 20:00:50 crc kubenswrapper[4897]: I0214 20:00:50.888005 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e95d0e1a-6046-4ec7-8422-0858aca3bca9","Type":"ContainerStarted","Data":"ce8e9f1efad3b731f39ac07ac44484ed6db50207677470d51c9cf0b79fed62b4"}
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.589656 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.719984 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ssh-key\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720290 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-config-data\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720368 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720512 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720542 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-workdir\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720588 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config-secret\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720608 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-temporary\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720623 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4j7r\" (UniqueName: \"kubernetes.io/projected/1ccac56d-8e29-4241-99ef-bb65d3ff373f-kube-api-access-n4j7r\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.720642 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ca-certs\") pod \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\" (UID: \"1ccac56d-8e29-4241-99ef-bb65d3ff373f\") "
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.723440 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.733206 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-config-data" (OuterVolumeSpecName: "config-data") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.740554 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.758558 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "local-storage09-crc".
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.773129 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccac56d-8e29-4241-99ef-bb65d3ff373f-kube-api-access-n4j7r" (OuterVolumeSpecName: "kube-api-access-n4j7r") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "kube-api-access-n4j7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.781363 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.785858 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.787633 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.803859 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1ccac56d-8e29-4241-99ef-bb65d3ff373f" (UID: "1ccac56d-8e29-4241-99ef-bb65d3ff373f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823445 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823521 4897 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823534 4897 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823561 4897 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823572 4897 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccac56d-8e29-4241-99ef-bb65d3ff373f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823583 4897 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4j7r\" (UniqueName: \"kubernetes.io/projected/1ccac56d-8e29-4241-99ef-bb65d3ff373f-kube-api-access-n4j7r\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823590 4897 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823599 4897 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccac56d-8e29-4241-99ef-bb65d3ff373f-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.823618 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccac56d-8e29-4241-99ef-bb65d3ff373f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.852160 4897 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.900128 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.900121 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccac56d-8e29-4241-99ef-bb65d3ff373f","Type":"ContainerDied","Data":"b3cf627af92b6e17d2bb6f392528c27fca45bdb935b0a4b28404284d663dac28"} Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.900240 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cf627af92b6e17d2bb6f392528c27fca45bdb935b0a4b28404284d663dac28" Feb 14 20:00:51 crc kubenswrapper[4897]: I0214 20:00:51.925620 4897 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 14 20:00:52 crc kubenswrapper[4897]: I0214 20:00:52.793870 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:00:52 crc kubenswrapper[4897]: E0214 20:00:52.794454 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:00:53 crc kubenswrapper[4897]: I0214 20:00:53.199240 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 20:00:58 crc kubenswrapper[4897]: I0214 20:00:58.231448 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.659160 4897 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 20:00:59 crc kubenswrapper[4897]: E0214 20:00:59.662767 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccac56d-8e29-4241-99ef-bb65d3ff373f" containerName="tempest-tests-tempest-tests-runner" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.662912 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccac56d-8e29-4241-99ef-bb65d3ff373f" containerName="tempest-tests-tempest-tests-runner" Feb 14 20:00:59 crc kubenswrapper[4897]: E0214 20:00:59.662986 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d570fe1-d9f5-4d80-baf9-17877fd99929" containerName="collect-profiles" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.663066 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d570fe1-d9f5-4d80-baf9-17877fd99929" containerName="collect-profiles" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.663821 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d570fe1-d9f5-4d80-baf9-17877fd99929" containerName="collect-profiles" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.663908 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccac56d-8e29-4241-99ef-bb65d3ff373f" containerName="tempest-tests-tempest-tests-runner" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.666208 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.670710 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mgspz" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.689735 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.803904 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.804119 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzjk8\" (UniqueName: \"kubernetes.io/projected/0a79556a-2b24-4bba-a50a-87428533496f-kube-api-access-bzjk8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.905800 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzjk8\" (UniqueName: \"kubernetes.io/projected/0a79556a-2b24-4bba-a50a-87428533496f-kube-api-access-bzjk8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.905901 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.907051 4897 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.938313 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzjk8\" (UniqueName: \"kubernetes.io/projected/0a79556a-2b24-4bba-a50a-87428533496f-kube-api-access-bzjk8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:00:59 crc kubenswrapper[4897]: I0214 20:00:59.948155 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a79556a-2b24-4bba-a50a-87428533496f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.004637 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.170563 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29518321-kgzj8"] Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.172278 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.222480 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29518321-kgzj8"] Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.314833 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-combined-ca-bundle\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.314886 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-config-data\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.314965 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-fernet-keys\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.315250 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gqk4x\" (UniqueName: \"kubernetes.io/projected/05076870-08f8-472c-bd61-0f74afdb9e47-kube-api-access-gqk4x\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.417565 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-combined-ca-bundle\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.418380 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-config-data\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.418479 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-fernet-keys\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.418556 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqk4x\" (UniqueName: \"kubernetes.io/projected/05076870-08f8-472c-bd61-0f74afdb9e47-kube-api-access-gqk4x\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.427128 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-combined-ca-bundle\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.438311 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-fernet-keys\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.441523 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-config-data\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.443766 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqk4x\" (UniqueName: \"kubernetes.io/projected/05076870-08f8-472c-bd61-0f74afdb9e47-kube-api-access-gqk4x\") pod \"keystone-cron-29518321-kgzj8\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.504343 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.617165 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 20:01:00 crc kubenswrapper[4897]: I0214 20:01:00.997410 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29518321-kgzj8"] Feb 14 20:01:01 crc kubenswrapper[4897]: I0214 20:01:01.002250 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0a79556a-2b24-4bba-a50a-87428533496f","Type":"ContainerStarted","Data":"3b7eefd129a81099bf7c36186756898f1074a7ed0be6683e81b76278f79316d9"} Feb 14 20:01:02 crc kubenswrapper[4897]: I0214 20:01:02.014360 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0a79556a-2b24-4bba-a50a-87428533496f","Type":"ContainerStarted","Data":"5530113418aa55e9203ecfeb062d6ddce36f8f844c26c632986f967845e8a0d4"} Feb 14 20:01:02 crc kubenswrapper[4897]: I0214 20:01:02.016474 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518321-kgzj8" event={"ID":"05076870-08f8-472c-bd61-0f74afdb9e47","Type":"ContainerStarted","Data":"3662aee3a07f33ed68c88b6b9029bf427fe349f3ccd4acdb1e9439ff1941ce41"} Feb 14 20:01:02 crc kubenswrapper[4897]: I0214 20:01:02.016519 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518321-kgzj8" event={"ID":"05076870-08f8-472c-bd61-0f74afdb9e47","Type":"ContainerStarted","Data":"38ab2b5f7bef02cd34c877ece408f1847e50e060e02596ffdd5b90d1e763a840"} Feb 14 20:01:02 crc kubenswrapper[4897]: I0214 20:01:02.031522 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.945109042 
podStartE2EDuration="3.031492354s" podCreationTimestamp="2026-02-14 20:00:59 +0000 UTC" firstStartedPulling="2026-02-14 20:01:00.640606041 +0000 UTC m=+4713.617014524" lastFinishedPulling="2026-02-14 20:01:01.726989353 +0000 UTC m=+4714.703397836" observedRunningTime="2026-02-14 20:01:02.023805976 +0000 UTC m=+4715.000214459" watchObservedRunningTime="2026-02-14 20:01:02.031492354 +0000 UTC m=+4715.007900837" Feb 14 20:01:02 crc kubenswrapper[4897]: I0214 20:01:02.046603 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29518321-kgzj8" podStartSLOduration=2.046580182 podStartE2EDuration="2.046580182s" podCreationTimestamp="2026-02-14 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 20:01:02.037376817 +0000 UTC m=+4715.013785320" watchObservedRunningTime="2026-02-14 20:01:02.046580182 +0000 UTC m=+4715.022988655" Feb 14 20:01:04 crc kubenswrapper[4897]: I0214 20:01:04.794102 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:01:04 crc kubenswrapper[4897]: E0214 20:01:04.794770 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:01:06 crc kubenswrapper[4897]: I0214 20:01:06.078494 4897 generic.go:334] "Generic (PLEG): container finished" podID="05076870-08f8-472c-bd61-0f74afdb9e47" containerID="3662aee3a07f33ed68c88b6b9029bf427fe349f3ccd4acdb1e9439ff1941ce41" exitCode=0 Feb 14 20:01:06 crc kubenswrapper[4897]: I0214 20:01:06.078575 4897 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518321-kgzj8" event={"ID":"05076870-08f8-472c-bd61-0f74afdb9e47","Type":"ContainerDied","Data":"3662aee3a07f33ed68c88b6b9029bf427fe349f3ccd4acdb1e9439ff1941ce41"} Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.104751 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29518321-kgzj8" event={"ID":"05076870-08f8-472c-bd61-0f74afdb9e47","Type":"ContainerDied","Data":"38ab2b5f7bef02cd34c877ece408f1847e50e060e02596ffdd5b90d1e763a840"} Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.105551 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ab2b5f7bef02cd34c877ece408f1847e50e060e02596ffdd5b90d1e763a840" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.111106 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.249848 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqk4x\" (UniqueName: \"kubernetes.io/projected/05076870-08f8-472c-bd61-0f74afdb9e47-kube-api-access-gqk4x\") pod \"05076870-08f8-472c-bd61-0f74afdb9e47\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.250075 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-fernet-keys\") pod \"05076870-08f8-472c-bd61-0f74afdb9e47\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.250150 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-combined-ca-bundle\") pod \"05076870-08f8-472c-bd61-0f74afdb9e47\" (UID: 
\"05076870-08f8-472c-bd61-0f74afdb9e47\") " Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.250248 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-config-data\") pod \"05076870-08f8-472c-bd61-0f74afdb9e47\" (UID: \"05076870-08f8-472c-bd61-0f74afdb9e47\") " Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.256514 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05076870-08f8-472c-bd61-0f74afdb9e47-kube-api-access-gqk4x" (OuterVolumeSpecName: "kube-api-access-gqk4x") pod "05076870-08f8-472c-bd61-0f74afdb9e47" (UID: "05076870-08f8-472c-bd61-0f74afdb9e47"). InnerVolumeSpecName "kube-api-access-gqk4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.257798 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "05076870-08f8-472c-bd61-0f74afdb9e47" (UID: "05076870-08f8-472c-bd61-0f74afdb9e47"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.287202 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05076870-08f8-472c-bd61-0f74afdb9e47" (UID: "05076870-08f8-472c-bd61-0f74afdb9e47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.322865 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-config-data" (OuterVolumeSpecName: "config-data") pod "05076870-08f8-472c-bd61-0f74afdb9e47" (UID: "05076870-08f8-472c-bd61-0f74afdb9e47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.353008 4897 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.353263 4897 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.353363 4897 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05076870-08f8-472c-bd61-0f74afdb9e47-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 20:01:08 crc kubenswrapper[4897]: I0214 20:01:08.353446 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqk4x\" (UniqueName: \"kubernetes.io/projected/05076870-08f8-472c-bd61-0f74afdb9e47-kube-api-access-gqk4x\") on node \"crc\" DevicePath \"\"" Feb 14 20:01:09 crc kubenswrapper[4897]: I0214 20:01:09.116968 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29518321-kgzj8" Feb 14 20:01:10 crc kubenswrapper[4897]: I0214 20:01:10.177954 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvc49" Feb 14 20:01:16 crc kubenswrapper[4897]: I0214 20:01:16.793714 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:01:16 crc kubenswrapper[4897]: E0214 20:01:16.794676 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:01:30 crc kubenswrapper[4897]: I0214 20:01:30.795158 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:01:30 crc kubenswrapper[4897]: E0214 20:01:30.795848 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.674613 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwjd/must-gather-whhz4"] Feb 14 20:01:42 crc kubenswrapper[4897]: E0214 20:01:42.676663 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05076870-08f8-472c-bd61-0f74afdb9e47" 
containerName="keystone-cron" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.676685 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="05076870-08f8-472c-bd61-0f74afdb9e47" containerName="keystone-cron" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.677078 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="05076870-08f8-472c-bd61-0f74afdb9e47" containerName="keystone-cron" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.680851 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.682872 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wjwjd"/"default-dockercfg-twb4l" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.684590 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wjwjd"/"openshift-service-ca.crt" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.688343 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wjwjd"/"kube-root-ca.crt" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.699584 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wjwjd/must-gather-whhz4"] Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.811523 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rp4\" (UniqueName: \"kubernetes.io/projected/ea578f80-e5d1-4648-bd64-a8144b08671c-kube-api-access-66rp4\") pod \"must-gather-whhz4\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.811579 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/ea578f80-e5d1-4648-bd64-a8144b08671c-must-gather-output\") pod \"must-gather-whhz4\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.914176 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rp4\" (UniqueName: \"kubernetes.io/projected/ea578f80-e5d1-4648-bd64-a8144b08671c-kube-api-access-66rp4\") pod \"must-gather-whhz4\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.914227 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ea578f80-e5d1-4648-bd64-a8144b08671c-must-gather-output\") pod \"must-gather-whhz4\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.914641 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ea578f80-e5d1-4648-bd64-a8144b08671c-must-gather-output\") pod \"must-gather-whhz4\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.933170 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rp4\" (UniqueName: \"kubernetes.io/projected/ea578f80-e5d1-4648-bd64-a8144b08671c-kube-api-access-66rp4\") pod \"must-gather-whhz4\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:42 crc kubenswrapper[4897]: I0214 20:01:42.999449 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:01:43 crc kubenswrapper[4897]: I0214 20:01:43.749061 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wjwjd/must-gather-whhz4"] Feb 14 20:01:43 crc kubenswrapper[4897]: I0214 20:01:43.794164 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:01:43 crc kubenswrapper[4897]: E0214 20:01:43.794598 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:01:45 crc kubenswrapper[4897]: I0214 20:01:45.057271 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/must-gather-whhz4" event={"ID":"ea578f80-e5d1-4648-bd64-a8144b08671c","Type":"ContainerStarted","Data":"14f832f93e1894a53b6456fe9029c6ac4ac23cfa57b64294da5729dbb3b99dff"} Feb 14 20:01:52 crc kubenswrapper[4897]: I0214 20:01:52.144456 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/must-gather-whhz4" event={"ID":"ea578f80-e5d1-4648-bd64-a8144b08671c","Type":"ContainerStarted","Data":"73b5bb6ca2d0b021f05a26cece0dab82e874e1053214c682811180ebef7f88ec"} Feb 14 20:01:52 crc kubenswrapper[4897]: I0214 20:01:52.145077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/must-gather-whhz4" event={"ID":"ea578f80-e5d1-4648-bd64-a8144b08671c","Type":"ContainerStarted","Data":"6d7489eb91351e7bd3c419435d24d744a4cec100ec53dbb6fbfbdcc8064656c4"} Feb 14 20:01:52 crc kubenswrapper[4897]: I0214 20:01:52.169003 4897 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-wjwjd/must-gather-whhz4" podStartSLOduration=2.868809964 podStartE2EDuration="10.168979553s" podCreationTimestamp="2026-02-14 20:01:42 +0000 UTC" firstStartedPulling="2026-02-14 20:01:44.193463076 +0000 UTC m=+4757.169871559" lastFinishedPulling="2026-02-14 20:01:51.493632665 +0000 UTC m=+4764.470041148" observedRunningTime="2026-02-14 20:01:52.159686406 +0000 UTC m=+4765.136094889" watchObservedRunningTime="2026-02-14 20:01:52.168979553 +0000 UTC m=+4765.145388046" Feb 14 20:01:57 crc kubenswrapper[4897]: I0214 20:01:57.802841 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:01:57 crc kubenswrapper[4897]: E0214 20:01:57.803739 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.022638 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-kjvm2"] Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.024433 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.128515 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0498e9d-f018-4417-8a72-077d5d05bc45-host\") pod \"crc-debug-kjvm2\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.128692 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9spt\" (UniqueName: \"kubernetes.io/projected/d0498e9d-f018-4417-8a72-077d5d05bc45-kube-api-access-g9spt\") pod \"crc-debug-kjvm2\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.230323 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0498e9d-f018-4417-8a72-077d5d05bc45-host\") pod \"crc-debug-kjvm2\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.230481 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9spt\" (UniqueName: \"kubernetes.io/projected/d0498e9d-f018-4417-8a72-077d5d05bc45-kube-api-access-g9spt\") pod \"crc-debug-kjvm2\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.231724 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0498e9d-f018-4417-8a72-077d5d05bc45-host\") pod \"crc-debug-kjvm2\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc 
kubenswrapper[4897]: I0214 20:01:58.258450 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9spt\" (UniqueName: \"kubernetes.io/projected/d0498e9d-f018-4417-8a72-077d5d05bc45-kube-api-access-g9spt\") pod \"crc-debug-kjvm2\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: I0214 20:01:58.342607 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:01:58 crc kubenswrapper[4897]: W0214 20:01:58.387966 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0498e9d_f018_4417_8a72_077d5d05bc45.slice/crio-006df0b3a8a54ff38273a3083472301e3c09d7aa258a4b14b5dfbfa2220f02ac WatchSource:0}: Error finding container 006df0b3a8a54ff38273a3083472301e3c09d7aa258a4b14b5dfbfa2220f02ac: Status 404 returned error can't find the container with id 006df0b3a8a54ff38273a3083472301e3c09d7aa258a4b14b5dfbfa2220f02ac Feb 14 20:01:59 crc kubenswrapper[4897]: I0214 20:01:59.223902 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" event={"ID":"d0498e9d-f018-4417-8a72-077d5d05bc45","Type":"ContainerStarted","Data":"006df0b3a8a54ff38273a3083472301e3c09d7aa258a4b14b5dfbfa2220f02ac"} Feb 14 20:02:08 crc kubenswrapper[4897]: I0214 20:02:08.794681 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:02:08 crc kubenswrapper[4897]: E0214 20:02:08.795644 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:02:09 crc kubenswrapper[4897]: I0214 20:02:09.346472 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" event={"ID":"d0498e9d-f018-4417-8a72-077d5d05bc45","Type":"ContainerStarted","Data":"6533b50994edfc3f46ec8281020f80245049cf1fc931aee8e2aaad8a07f26a1b"} Feb 14 20:02:09 crc kubenswrapper[4897]: I0214 20:02:09.367803 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" podStartSLOduration=0.920339956 podStartE2EDuration="11.367784974s" podCreationTimestamp="2026-02-14 20:01:58 +0000 UTC" firstStartedPulling="2026-02-14 20:01:58.389749575 +0000 UTC m=+4771.366158058" lastFinishedPulling="2026-02-14 20:02:08.837194593 +0000 UTC m=+4781.813603076" observedRunningTime="2026-02-14 20:02:09.357102143 +0000 UTC m=+4782.333510636" watchObservedRunningTime="2026-02-14 20:02:09.367784974 +0000 UTC m=+4782.344193457" Feb 14 20:02:22 crc kubenswrapper[4897]: I0214 20:02:22.794645 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:02:22 crc kubenswrapper[4897]: E0214 20:02:22.795701 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:02:34 crc kubenswrapper[4897]: I0214 20:02:34.794215 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:02:34 crc kubenswrapper[4897]: E0214 20:02:34.795163 4897 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:02:45 crc kubenswrapper[4897]: I0214 20:02:45.794334 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:02:45 crc kubenswrapper[4897]: E0214 20:02:45.795041 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:02:51 crc kubenswrapper[4897]: I0214 20:02:51.155903 4897 generic.go:334] "Generic (PLEG): container finished" podID="d0498e9d-f018-4417-8a72-077d5d05bc45" containerID="6533b50994edfc3f46ec8281020f80245049cf1fc931aee8e2aaad8a07f26a1b" exitCode=0 Feb 14 20:02:51 crc kubenswrapper[4897]: I0214 20:02:51.156121 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" event={"ID":"d0498e9d-f018-4417-8a72-077d5d05bc45","Type":"ContainerDied","Data":"6533b50994edfc3f46ec8281020f80245049cf1fc931aee8e2aaad8a07f26a1b"} Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.669652 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.712116 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-kjvm2"] Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.725229 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-kjvm2"] Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.780971 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0498e9d-f018-4417-8a72-077d5d05bc45-host\") pod \"d0498e9d-f018-4417-8a72-077d5d05bc45\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.781119 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0498e9d-f018-4417-8a72-077d5d05bc45-host" (OuterVolumeSpecName: "host") pod "d0498e9d-f018-4417-8a72-077d5d05bc45" (UID: "d0498e9d-f018-4417-8a72-077d5d05bc45"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.781289 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9spt\" (UniqueName: \"kubernetes.io/projected/d0498e9d-f018-4417-8a72-077d5d05bc45-kube-api-access-g9spt\") pod \"d0498e9d-f018-4417-8a72-077d5d05bc45\" (UID: \"d0498e9d-f018-4417-8a72-077d5d05bc45\") " Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.781784 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0498e9d-f018-4417-8a72-077d5d05bc45-host\") on node \"crc\" DevicePath \"\"" Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.790668 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0498e9d-f018-4417-8a72-077d5d05bc45-kube-api-access-g9spt" (OuterVolumeSpecName: "kube-api-access-g9spt") pod "d0498e9d-f018-4417-8a72-077d5d05bc45" (UID: "d0498e9d-f018-4417-8a72-077d5d05bc45"). InnerVolumeSpecName "kube-api-access-g9spt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:02:52 crc kubenswrapper[4897]: I0214 20:02:52.883940 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9spt\" (UniqueName: \"kubernetes.io/projected/d0498e9d-f018-4417-8a72-077d5d05bc45-kube-api-access-g9spt\") on node \"crc\" DevicePath \"\"" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.187620 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006df0b3a8a54ff38273a3083472301e3c09d7aa258a4b14b5dfbfa2220f02ac" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.187934 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-kjvm2" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.811804 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0498e9d-f018-4417-8a72-077d5d05bc45" path="/var/lib/kubelet/pods/d0498e9d-f018-4417-8a72-077d5d05bc45/volumes" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.937024 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-wjlcz"] Feb 14 20:02:53 crc kubenswrapper[4897]: E0214 20:02:53.937531 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0498e9d-f018-4417-8a72-077d5d05bc45" containerName="container-00" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.937546 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0498e9d-f018-4417-8a72-077d5d05bc45" containerName="container-00" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.937807 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0498e9d-f018-4417-8a72-077d5d05bc45" containerName="container-00" Feb 14 20:02:53 crc kubenswrapper[4897]: I0214 20:02:53.938712 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:54 crc kubenswrapper[4897]: I0214 20:02:54.008682 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce5ee55e-d853-46af-ab2f-61184241d21f-host\") pod \"crc-debug-wjlcz\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:54 crc kubenswrapper[4897]: I0214 20:02:54.008991 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xv4n\" (UniqueName: \"kubernetes.io/projected/ce5ee55e-d853-46af-ab2f-61184241d21f-kube-api-access-2xv4n\") pod \"crc-debug-wjlcz\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:54 crc kubenswrapper[4897]: I0214 20:02:54.112271 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xv4n\" (UniqueName: \"kubernetes.io/projected/ce5ee55e-d853-46af-ab2f-61184241d21f-kube-api-access-2xv4n\") pod \"crc-debug-wjlcz\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:54 crc kubenswrapper[4897]: I0214 20:02:54.112512 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce5ee55e-d853-46af-ab2f-61184241d21f-host\") pod \"crc-debug-wjlcz\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:54 crc kubenswrapper[4897]: I0214 20:02:54.112834 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce5ee55e-d853-46af-ab2f-61184241d21f-host\") pod \"crc-debug-wjlcz\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:54 crc 
kubenswrapper[4897]: I0214 20:02:54.979683 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xv4n\" (UniqueName: \"kubernetes.io/projected/ce5ee55e-d853-46af-ab2f-61184241d21f-kube-api-access-2xv4n\") pod \"crc-debug-wjlcz\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:55 crc kubenswrapper[4897]: I0214 20:02:55.158064 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:55 crc kubenswrapper[4897]: I0214 20:02:55.212656 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" event={"ID":"ce5ee55e-d853-46af-ab2f-61184241d21f","Type":"ContainerStarted","Data":"2052fc19e4cc8e45c725187f0c7e567401c5449877227775816bd33c76e4eb04"} Feb 14 20:02:56 crc kubenswrapper[4897]: I0214 20:02:56.224135 4897 generic.go:334] "Generic (PLEG): container finished" podID="9748a754-75f5-4f7d-9e7b-a6135dd3778d" containerID="4a2ab4d17858d582748edaafa439d45f133d6351b1fe7558ddad33188c7b1b13" exitCode=0 Feb 14 20:02:56 crc kubenswrapper[4897]: I0214 20:02:56.224233 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" event={"ID":"9748a754-75f5-4f7d-9e7b-a6135dd3778d","Type":"ContainerDied","Data":"4a2ab4d17858d582748edaafa439d45f133d6351b1fe7558ddad33188c7b1b13"} Feb 14 20:02:56 crc kubenswrapper[4897]: I0214 20:02:56.226413 4897 generic.go:334] "Generic (PLEG): container finished" podID="ce5ee55e-d853-46af-ab2f-61184241d21f" containerID="a433a1858ec1d14318e6ee846b8c017f9d1d7ec5ed17a6a369bebfdf2bf3addb" exitCode=0 Feb 14 20:02:56 crc kubenswrapper[4897]: I0214 20:02:56.226441 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" 
event={"ID":"ce5ee55e-d853-46af-ab2f-61184241d21f","Type":"ContainerDied","Data":"a433a1858ec1d14318e6ee846b8c017f9d1d7ec5ed17a6a369bebfdf2bf3addb"} Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.177488 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-wjlcz"] Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.204073 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-wjlcz"] Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.245362 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" event={"ID":"9748a754-75f5-4f7d-9e7b-a6135dd3778d","Type":"ContainerStarted","Data":"eb0d0f3ab0ff885f522fca70bc848c28ae8ee10e6f1b2e5db88ae7406b589741"} Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.403833 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.492434 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce5ee55e-d853-46af-ab2f-61184241d21f-host\") pod \"ce5ee55e-d853-46af-ab2f-61184241d21f\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.492486 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xv4n\" (UniqueName: \"kubernetes.io/projected/ce5ee55e-d853-46af-ab2f-61184241d21f-kube-api-access-2xv4n\") pod \"ce5ee55e-d853-46af-ab2f-61184241d21f\" (UID: \"ce5ee55e-d853-46af-ab2f-61184241d21f\") " Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.493043 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce5ee55e-d853-46af-ab2f-61184241d21f-host" (OuterVolumeSpecName: "host") pod "ce5ee55e-d853-46af-ab2f-61184241d21f" (UID: 
"ce5ee55e-d853-46af-ab2f-61184241d21f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.500772 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5ee55e-d853-46af-ab2f-61184241d21f-kube-api-access-2xv4n" (OuterVolumeSpecName: "kube-api-access-2xv4n") pod "ce5ee55e-d853-46af-ab2f-61184241d21f" (UID: "ce5ee55e-d853-46af-ab2f-61184241d21f"). InnerVolumeSpecName "kube-api-access-2xv4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.594628 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce5ee55e-d853-46af-ab2f-61184241d21f-host\") on node \"crc\" DevicePath \"\"" Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.594891 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xv4n\" (UniqueName: \"kubernetes.io/projected/ce5ee55e-d853-46af-ab2f-61184241d21f-kube-api-access-2xv4n\") on node \"crc\" DevicePath \"\"" Feb 14 20:02:57 crc kubenswrapper[4897]: I0214 20:02:57.808682 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5ee55e-d853-46af-ab2f-61184241d21f" path="/var/lib/kubelet/pods/ce5ee55e-d853-46af-ab2f-61184241d21f/volumes" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.258144 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-wjlcz" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.258169 4897 scope.go:117] "RemoveContainer" containerID="a433a1858ec1d14318e6ee846b8c017f9d1d7ec5ed17a6a369bebfdf2bf3addb" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.385811 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-7cllg"] Feb 14 20:02:58 crc kubenswrapper[4897]: E0214 20:02:58.386624 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5ee55e-d853-46af-ab2f-61184241d21f" containerName="container-00" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.386739 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5ee55e-d853-46af-ab2f-61184241d21f" containerName="container-00" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.387430 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5ee55e-d853-46af-ab2f-61184241d21f" containerName="container-00" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.388494 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.518981 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f81d3a1-ce2e-4369-bc50-13fc46a13823-host\") pod \"crc-debug-7cllg\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.519696 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2vf\" (UniqueName: \"kubernetes.io/projected/5f81d3a1-ce2e-4369-bc50-13fc46a13823-kube-api-access-8f2vf\") pod \"crc-debug-7cllg\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.621957 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2vf\" (UniqueName: \"kubernetes.io/projected/5f81d3a1-ce2e-4369-bc50-13fc46a13823-kube-api-access-8f2vf\") pod \"crc-debug-7cllg\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.622532 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f81d3a1-ce2e-4369-bc50-13fc46a13823-host\") pod \"crc-debug-7cllg\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.622598 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f81d3a1-ce2e-4369-bc50-13fc46a13823-host\") pod \"crc-debug-7cllg\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc 
kubenswrapper[4897]: I0214 20:02:58.661168 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2vf\" (UniqueName: \"kubernetes.io/projected/5f81d3a1-ce2e-4369-bc50-13fc46a13823-kube-api-access-8f2vf\") pod \"crc-debug-7cllg\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.704989 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:02:58 crc kubenswrapper[4897]: I0214 20:02:58.795221 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:02:58 crc kubenswrapper[4897]: E0214 20:02:58.795677 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:02:59 crc kubenswrapper[4897]: I0214 20:02:59.279949 4897 generic.go:334] "Generic (PLEG): container finished" podID="5f81d3a1-ce2e-4369-bc50-13fc46a13823" containerID="64081e5e1f1db2e8511aaf4bd35e4f3a4912ea8cc07a3a41b73e6641fdd81051" exitCode=0 Feb 14 20:02:59 crc kubenswrapper[4897]: I0214 20:02:59.280586 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-7cllg" event={"ID":"5f81d3a1-ce2e-4369-bc50-13fc46a13823","Type":"ContainerDied","Data":"64081e5e1f1db2e8511aaf4bd35e4f3a4912ea8cc07a3a41b73e6641fdd81051"} Feb 14 20:02:59 crc kubenswrapper[4897]: I0214 20:02:59.280639 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/crc-debug-7cllg" 
event={"ID":"5f81d3a1-ce2e-4369-bc50-13fc46a13823","Type":"ContainerStarted","Data":"84706ea6acec944e3d39c7c2d098ece6b574c4a8c2dd60a82a4debc1317e11a5"} Feb 14 20:02:59 crc kubenswrapper[4897]: I0214 20:02:59.329877 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-7cllg"] Feb 14 20:02:59 crc kubenswrapper[4897]: I0214 20:02:59.348433 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwjd/crc-debug-7cllg"] Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.462960 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.581473 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2vf\" (UniqueName: \"kubernetes.io/projected/5f81d3a1-ce2e-4369-bc50-13fc46a13823-kube-api-access-8f2vf\") pod \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.581615 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f81d3a1-ce2e-4369-bc50-13fc46a13823-host\") pod \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\" (UID: \"5f81d3a1-ce2e-4369-bc50-13fc46a13823\") " Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.581683 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f81d3a1-ce2e-4369-bc50-13fc46a13823-host" (OuterVolumeSpecName: "host") pod "5f81d3a1-ce2e-4369-bc50-13fc46a13823" (UID: "5f81d3a1-ce2e-4369-bc50-13fc46a13823"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.582918 4897 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f81d3a1-ce2e-4369-bc50-13fc46a13823-host\") on node \"crc\" DevicePath \"\"" Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.590228 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f81d3a1-ce2e-4369-bc50-13fc46a13823-kube-api-access-8f2vf" (OuterVolumeSpecName: "kube-api-access-8f2vf") pod "5f81d3a1-ce2e-4369-bc50-13fc46a13823" (UID: "5f81d3a1-ce2e-4369-bc50-13fc46a13823"). InnerVolumeSpecName "kube-api-access-8f2vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:03:00 crc kubenswrapper[4897]: I0214 20:03:00.685640 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f2vf\" (UniqueName: \"kubernetes.io/projected/5f81d3a1-ce2e-4369-bc50-13fc46a13823-kube-api-access-8f2vf\") on node \"crc\" DevicePath \"\"" Feb 14 20:03:01 crc kubenswrapper[4897]: I0214 20:03:01.313716 4897 scope.go:117] "RemoveContainer" containerID="64081e5e1f1db2e8511aaf4bd35e4f3a4912ea8cc07a3a41b73e6641fdd81051" Feb 14 20:03:01 crc kubenswrapper[4897]: I0214 20:03:01.313771 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/crc-debug-7cllg" Feb 14 20:03:01 crc kubenswrapper[4897]: I0214 20:03:01.814459 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f81d3a1-ce2e-4369-bc50-13fc46a13823" path="/var/lib/kubelet/pods/5f81d3a1-ce2e-4369-bc50-13fc46a13823/volumes" Feb 14 20:03:05 crc kubenswrapper[4897]: I0214 20:03:05.015493 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 20:03:05 crc kubenswrapper[4897]: I0214 20:03:05.016052 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 20:03:10 crc kubenswrapper[4897]: I0214 20:03:10.794025 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d" Feb 14 20:03:11 crc kubenswrapper[4897]: I0214 20:03:11.455321 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"0920c69bdd62f6bfbe3c53d6427630f4e3c45b27232e9664bc51391f7b5b4491"} Feb 14 20:03:25 crc kubenswrapper[4897]: I0214 20:03:25.024205 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 20:03:25 crc kubenswrapper[4897]: I0214 20:03:25.031950 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7cfcf6657f-wsnmf" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.233944 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_05b9fa1a-7c1c-464e-a03e-8067e2bb6c80/aodh-api/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.440565 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_05b9fa1a-7c1c-464e-a03e-8067e2bb6c80/aodh-evaluator/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.470261 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_05b9fa1a-7c1c-464e-a03e-8067e2bb6c80/aodh-listener/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.551054 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_05b9fa1a-7c1c-464e-a03e-8067e2bb6c80/aodh-notifier/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.627395 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75dc4484db-pr977_79135975-c59e-4ea0-8487-7d47e4d5d632/barbican-api/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.671331 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75dc4484db-pr977_79135975-c59e-4ea0-8487-7d47e4d5d632/barbican-api-log/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.757979 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c4d46dc74-lkxxb_cecda2fd-aafa-4261-9947-e07a96c39aa5/barbican-keystone-listener/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.871170 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c4d46dc74-lkxxb_cecda2fd-aafa-4261-9947-e07a96c39aa5/barbican-keystone-listener-log/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.970588 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6576bd4d47-rhqmj_8a7e158b-1796-4311-89ce-c05a5f1acd87/barbican-worker-log/0.log" Feb 14 20:03:38 crc kubenswrapper[4897]: I0214 20:03:38.993662 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6576bd4d47-rhqmj_8a7e158b-1796-4311-89ce-c05a5f1acd87/barbican-worker/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.114459 4897 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-r8ztk_afff7c3d-a238-49b8-8b7c-d041c4eb9ac2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.238633 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_289311f5-ac62-4fe6-b260-8bda0a09331b/ceilometer-central-agent/1.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.319505 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_289311f5-ac62-4fe6-b260-8bda0a09331b/ceilometer-notification-agent/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.359580 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_289311f5-ac62-4fe6-b260-8bda0a09331b/ceilometer-central-agent/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.442917 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_289311f5-ac62-4fe6-b260-8bda0a09331b/proxy-httpd/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.477535 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_289311f5-ac62-4fe6-b260-8bda0a09331b/sg-core/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.616781 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5c87e5c6-bedb-4830-9ad3-96d9eda6f476/cinder-api/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.652256 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5c87e5c6-bedb-4830-9ad3-96d9eda6f476/cinder-api-log/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.808856 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e95d0e1a-6046-4ec7-8422-0858aca3bca9/cinder-scheduler/1.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.873614 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e95d0e1a-6046-4ec7-8422-0858aca3bca9/cinder-scheduler/0.log" Feb 14 20:03:39 crc kubenswrapper[4897]: I0214 20:03:39.909806 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e95d0e1a-6046-4ec7-8422-0858aca3bca9/probe/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.036810 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hmfrr_56912149-5519-4b45-8e6e-4585b86ee278/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.120742 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4h9mv_f4d9e4e2-d6b3-4618-9535-35fd4379f2a2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.222915 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ghds2_31859f8b-6460-470d-b9e5-56b33ef4a88d/init/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.397758 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ghds2_31859f8b-6460-470d-b9e5-56b33ef4a88d/init/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.477873 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pfjzp_1587215e-5d70-4aa9-b4a6-e3f84ae07453/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.484971 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ghds2_31859f8b-6460-470d-b9e5-56b33ef4a88d/dnsmasq-dns/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.704558 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_c42ac74f-f937-4f5a-973e-a97c0ec3986a/glance-log/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.713955 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c42ac74f-f937-4f5a-973e-a97c0ec3986a/glance-httpd/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.860052 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_46937bb4-8832-4a52-a593-bee2fc6e292b/glance-httpd/0.log" Feb 14 20:03:40 crc kubenswrapper[4897]: I0214 20:03:40.887024 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_46937bb4-8832-4a52-a593-bee2fc6e292b/glance-log/0.log" Feb 14 20:03:41 crc kubenswrapper[4897]: I0214 20:03:41.382984 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6dff47865f-dwdfs_8d94bdc7-c732-4513-878f-0d7f8ae186ca/heat-api/0.log" Feb 14 20:03:41 crc kubenswrapper[4897]: I0214 20:03:41.848304 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-sffff_39b7fda9-b6bc-4834-97ce-fc21c8fa6b85/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:41 crc kubenswrapper[4897]: I0214 20:03:41.869155 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5858fcf85c-g8zcx_ffd0f657-d81f-4767-b645-685963cf78ca/heat-engine/0.log" Feb 14 20:03:41 crc kubenswrapper[4897]: I0214 20:03:41.993207 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-59bb6b8559-n8bq2_aa628683-cd13-40e1-a275-1bf56d130479/heat-cfnapi/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.086428 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mbp8l_b7ad74b7-7e30-4bfd-b608-a4c89a5286c1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.243640 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29518321-kgzj8_05076870-08f8-472c-bd61-0f74afdb9e47/keystone-cron/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.371126 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_48ec6bd3-236f-4982-8dfa-e5c72c4d67bc/kube-state-metrics/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.593932 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-68lkb_f9698ab0-7eea-4fe4-be5a-b864ed73c28f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.674894 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-8wz95_0d55ecd3-26e4-46ef-9ab2-addd80af57d7/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.846594 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-798cbbdc78-n5tht_78184439-943a-4776-834b-f797a20bb2c1/keystone-api/0.log" Feb 14 20:03:42 crc kubenswrapper[4897]: I0214 20:03:42.942790 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_ce461153-c9cf-4a4a-a546-bf3a5effc936/mysqld-exporter/0.log" Feb 14 20:03:43 crc kubenswrapper[4897]: I0214 20:03:43.667223 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-567589579f-jbtqc_ec334ed0-f181-451a-8f76-12defbfc2460/neutron-httpd/0.log" Feb 14 20:03:43 crc kubenswrapper[4897]: I0214 20:03:43.675021 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-567589579f-jbtqc_ec334ed0-f181-451a-8f76-12defbfc2460/neutron-api/0.log" Feb 14 20:03:43 crc kubenswrapper[4897]: I0214 20:03:43.901513 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-lxsgs_dc59b218-0f6d-4dcf-8809-74df47d30b47/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.213643 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee0355d7-cd7c-4073-8996-b6e54e93319d/nova-api-log/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.314454 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_520a8b04-bc67-440f-958b-166905cd4e0a/nova-cell0-conductor-conductor/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.562871 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_fd17288a-e92b-4fea-9b86-9cf6c22f1b34/nova-cell1-conductor-conductor/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.571910 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee0355d7-cd7c-4073-8996-b6e54e93319d/nova-api-api/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.646002 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ea23c74e-626a-4a73-8056-0b261563e5da/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.947987 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-2h9bx_f5be1414-fd81-4c71-80b7-94a96048bd6b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:44 crc kubenswrapper[4897]: I0214 20:03:44.948994 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_964cb23c-1cc7-43f9-8ce3-b5c280f5cd28/nova-metadata-log/0.log" Feb 14 20:03:45 crc kubenswrapper[4897]: I0214 20:03:45.284766 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fdda6cd9-a603-4bb0-8595-3d128fc9e324/mysql-bootstrap/0.log" Feb 14 20:03:45 crc kubenswrapper[4897]: I0214 20:03:45.370273 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_965f9d5d-41a1-413c-a99a-09596c896734/nova-scheduler-scheduler/0.log" Feb 14 20:03:45 crc kubenswrapper[4897]: I0214 20:03:45.513617 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fdda6cd9-a603-4bb0-8595-3d128fc9e324/galera/1.log" Feb 14 20:03:45 crc kubenswrapper[4897]: I0214 20:03:45.523205 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fdda6cd9-a603-4bb0-8595-3d128fc9e324/mysql-bootstrap/0.log" Feb 14 20:03:45 crc kubenswrapper[4897]: I0214 20:03:45.585682 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fdda6cd9-a603-4bb0-8595-3d128fc9e324/galera/0.log" Feb 14 20:03:45 crc kubenswrapper[4897]: I0214 20:03:45.741085 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9a8b3d12-d5db-435a-ba48-fbe1e31fef96/mysql-bootstrap/0.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.215056 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9a8b3d12-d5db-435a-ba48-fbe1e31fef96/mysql-bootstrap/0.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.258873 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9a8b3d12-d5db-435a-ba48-fbe1e31fef96/galera/0.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.316622 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_9a8b3d12-d5db-435a-ba48-fbe1e31fef96/galera/1.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.485469 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_58bd1c73-7683-4665-92cc-2dbb8a1658a3/openstackclient/0.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.667437 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9ql27_73e940f4-b0ed-44a0-8ec6-ade047f3b0b4/openstack-network-exporter/0.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.817617 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8jqrb_643a69d8-25d7-4261-8848-0793ca7368fb/ovsdb-server-init/0.log" Feb 14 20:03:46 crc kubenswrapper[4897]: I0214 20:03:46.943663 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_964cb23c-1cc7-43f9-8ce3-b5c280f5cd28/nova-metadata-metadata/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.030892 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8jqrb_643a69d8-25d7-4261-8848-0793ca7368fb/ovsdb-server/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.091810 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8jqrb_643a69d8-25d7-4261-8848-0793ca7368fb/ovs-vswitchd/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.102656 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8jqrb_643a69d8-25d7-4261-8848-0793ca7368fb/ovsdb-server-init/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.264749 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wlxqg_c6a557e7-f135-4a79-9525-aed106fd814c/ovn-controller/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.418932 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mrdns_690f39d6-bd85-4b27-97f5-148d4976aebb/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.561444 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d9e0766f-fee2-48be-b8d6-1b04e52fe8ee/openstack-network-exporter/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.657326 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d9e0766f-fee2-48be-b8d6-1b04e52fe8ee/ovn-northd/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.719462 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1d77a004-19c2-43a0-bbe7-6e94f0d05a4e/openstack-network-exporter/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.789432 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1d77a004-19c2-43a0-bbe7-6e94f0d05a4e/ovsdbserver-nb/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.914577 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bbbc45ca-578f-42e4-b2e9-596c8b2587a1/openstack-network-exporter/0.log" Feb 14 20:03:47 crc kubenswrapper[4897]: I0214 20:03:47.980522 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bbbc45ca-578f-42e4-b2e9-596c8b2587a1/ovsdbserver-sb/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.199999 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bb56bbbfb-v5pf9_df22fdf1-e5d3-4d8b-9385-4f3abeda71ee/placement-api/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.266995 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3c77ebc2-8dc3-4b0f-8f95-b3208b853935/init-config-reloader/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.292747 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-bb56bbbfb-v5pf9_df22fdf1-e5d3-4d8b-9385-4f3abeda71ee/placement-log/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.486758 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3c77ebc2-8dc3-4b0f-8f95-b3208b853935/prometheus/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.500517 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3c77ebc2-8dc3-4b0f-8f95-b3208b853935/config-reloader/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.525375 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3c77ebc2-8dc3-4b0f-8f95-b3208b853935/init-config-reloader/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.571486 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3c77ebc2-8dc3-4b0f-8f95-b3208b853935/thanos-sidecar/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.667752 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_292d0d53-8176-4764-84c5-a899eb11ab99/setup-container/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.840823 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_292d0d53-8176-4764-84c5-a899eb11ab99/rabbitmq/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.882985 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_292d0d53-8176-4764-84c5-a899eb11ab99/setup-container/0.log" Feb 14 20:03:48 crc kubenswrapper[4897]: I0214 20:03:48.937159 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8a19f01a-c85c-492f-a991-b0a499611db3/setup-container/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.147484 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8a19f01a-c85c-492f-a991-b0a499611db3/setup-container/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.243574 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8a19f01a-c85c-492f-a991-b0a499611db3/rabbitmq/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.246744 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe/setup-container/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.446273 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe/setup-container/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.515298 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_01d977aa-d8e9-4a2d-9a3f-efc2000f3ebe/rabbitmq/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.562344 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_540a20b2-a6ae-4527-bb75-b6d570169dc2/setup-container/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.789928 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_540a20b2-a6ae-4527-bb75-b6d570169dc2/setup-container/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.811850 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fv29s_e1552b46-d09d-4156-97e3-0887c5071664/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:49 crc kubenswrapper[4897]: I0214 20:03:49.821612 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_540a20b2-a6ae-4527-bb75-b6d570169dc2/rabbitmq/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.017700 4897 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-58kld_c6adeab7-7f81-44b5-8a1d-072f7c050466/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.086205 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kshf2_53f34fde-c1f7-4d7c-906e-eb55326f4789/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.283829 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-446bh_7f79cb40-76f5-40cd-9af1-82758f503ae7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.373214 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hl94t_0a68fa67-8186-4606-96b5-fc7ddfd97530/ssh-known-hosts-edpm-deployment/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.577156 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-697fc44bdc-wm8v2_a7e768f3-e3b8-4197-aaeb-8b1013320b47/proxy-server/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.704802 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nm7qg_18272353-8a77-4df9-baab-a4c2a6e6d0cb/swift-ring-rebalance/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.707228 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-697fc44bdc-wm8v2_a7e768f3-e3b8-4197-aaeb-8b1013320b47/proxy-httpd/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.834723 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/account-auditor/0.log" Feb 14 20:03:50 crc kubenswrapper[4897]: I0214 20:03:50.913380 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/account-reaper/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.021075 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/account-replicator/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.061415 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/account-server/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.074713 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/container-auditor/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.178482 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/container-replicator/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.271662 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/container-server/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.325016 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/container-updater/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.325476 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/object-auditor/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.402593 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/object-expirer/0.log" Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.547709 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/object-updater/0.log"
Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.551788 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/object-server/0.log"
Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.567098 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/object-replicator/0.log"
Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.639114 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/rsync/0.log"
Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.815828 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_674b3cbc-fa6f-4475-bebd-314f24beaaa0/swift-recon-cron/0.log"
Feb 14 20:03:51 crc kubenswrapper[4897]: I0214 20:03:51.877628 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-76ncx_2c56a9d8-a9ad-465f-a8b2-f341f5c3ce00/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 20:03:52 crc kubenswrapper[4897]: I0214 20:03:52.027926 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-pnr7w_40ebae8a-773a-4b42-9385-81e545bff644/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 20:03:52 crc kubenswrapper[4897]: I0214 20:03:52.271848 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0a79556a-2b24-4bba-a50a-87428533496f/test-operator-logs-container/0.log"
Feb 14 20:03:52 crc kubenswrapper[4897]: I0214 20:03:52.406396 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tshsh_9c7ad489-e9fd-47b8-aab8-7042415968af/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 20:03:52 crc kubenswrapper[4897]: I0214 20:03:52.504659 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_1ccac56d-8e29-4241-99ef-bb65d3ff373f/tempest-tests-tempest-tests-runner/0.log"
Feb 14 20:03:59 crc kubenswrapper[4897]: I0214 20:03:59.791788 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_429062cc-8ca1-4e1f-a1b3-d84bbd4d15df/memcached/0.log"
Feb 14 20:04:19 crc kubenswrapper[4897]: I0214 20:04:19.729435 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/util/0.log"
Feb 14 20:04:19 crc kubenswrapper[4897]: I0214 20:04:19.958877 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/util/0.log"
Feb 14 20:04:19 crc kubenswrapper[4897]: I0214 20:04:19.959287 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/pull/0.log"
Feb 14 20:04:19 crc kubenswrapper[4897]: I0214 20:04:19.979186 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/pull/0.log"
Feb 14 20:04:20 crc kubenswrapper[4897]: I0214 20:04:20.133934 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/util/0.log"
Feb 14 20:04:20 crc kubenswrapper[4897]: I0214 20:04:20.134511 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/pull/0.log"
Feb 14 20:04:20 crc kubenswrapper[4897]: I0214 20:04:20.139996 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5615ee41e0de954bdabf4365434fbee274e379add01ffb213406a272b1xmv6t_cb7d19c8-0cd4-48cb-bea2-1178ad5801c7/extract/0.log"
Feb 14 20:04:20 crc kubenswrapper[4897]: I0214 20:04:20.618743 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-drm7d_fe513351-3f7b-436d-9218-a66a6f579948/manager/0.log"
Feb 14 20:04:20 crc kubenswrapper[4897]: I0214 20:04:20.942261 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-wsghb_0128668e-be83-412e-96e6-8c158ab45cc5/manager/0.log"
Feb 14 20:04:21 crc kubenswrapper[4897]: I0214 20:04:21.245541 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-5v2tq_10c98e4f-ae22-481b-992d-6804a1b5d0cc/manager/0.log"
Feb 14 20:04:21 crc kubenswrapper[4897]: I0214 20:04:21.407293 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-tsqnc_de1e8e22-10a4-4d2a-855f-4c7bb6a49096/manager/0.log"
Feb 14 20:04:21 crc kubenswrapper[4897]: I0214 20:04:21.899965 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fzgws_a2a15c49-cac6-4772-be07-69fd7597b692/manager/1.log"
Feb 14 20:04:22 crc kubenswrapper[4897]: I0214 20:04:22.097840 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fzgws_a2a15c49-cac6-4772-be07-69fd7597b692/manager/0.log"
Feb 14 20:04:22 crc kubenswrapper[4897]: I0214 20:04:22.176680 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-9ht86_bd9aef55-ad36-4675-a79a-a1829c9b3b3e/manager/0.log"
Feb 14 20:04:22 crc kubenswrapper[4897]: I0214 20:04:22.463988 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-nwjnd_5e11063d-aac7-4fea-91d9-0b560622ccb9/manager/0.log"
Feb 14 20:04:22 crc kubenswrapper[4897]: I0214 20:04:22.695259 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-5dg28_6fe73ade-8031-493c-9628-018ad436c7a5/manager/0.log"
Feb 14 20:04:22 crc kubenswrapper[4897]: I0214 20:04:22.935117 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rtvvf_8238fbef-1e59-4430-af92-1be3d70c4d84/manager/0.log"
Feb 14 20:04:22 crc kubenswrapper[4897]: I0214 20:04:22.976502 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-ts22t_8dffc7df-2563-4f02-8dfc-83ab824af909/manager/0.log"
Feb 14 20:04:23 crc kubenswrapper[4897]: I0214 20:04:23.188952 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-bl5g8_7c6ab7c6-c333-41db-ba23-f89b3eff3eef/manager/0.log"
Feb 14 20:04:23 crc kubenswrapper[4897]: I0214 20:04:23.469661 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-gvcdc_088b6f9e-b5ec-48f5-a7bb-4d9ef0db9820/manager/0.log"
Feb 14 20:04:24 crc kubenswrapper[4897]: I0214 20:04:24.202222 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9csghqz_afb3d9d3-a3e1-4aac-89ef-a7128579e6e9/manager/0.log"
Feb 14 20:04:24 crc kubenswrapper[4897]: I0214 20:04:24.674602 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-99cb98555-5nrbh_55ee13ff-72a6-4bdb-8461-fb545f66b881/operator/0.log"
Feb 14 20:04:24 crc kubenswrapper[4897]: I0214 20:04:24.867068 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bdg8n_afb2923f-489f-4ce0-bd55-f95a6c59f809/registry-server/1.log"
Feb 14 20:04:24 crc kubenswrapper[4897]: I0214 20:04:24.871786 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bdg8n_afb2923f-489f-4ce0-bd55-f95a6c59f809/registry-server/0.log"
Feb 14 20:04:25 crc kubenswrapper[4897]: I0214 20:04:25.138422 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-bh95f_d2543021-51cc-4cbe-9293-a6e02894e1f4/manager/0.log"
Feb 14 20:04:25 crc kubenswrapper[4897]: I0214 20:04:25.405440 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-gfrd9_cd0646ca-c695-4387-ba4b-cc9a3d85b460/manager/0.log"
Feb 14 20:04:25 crc kubenswrapper[4897]: I0214 20:04:25.958087 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wdv5h_fc708ffc-dcb4-4ac0-9982-4cf347cd505d/operator/0.log"
Feb 14 20:04:26 crc kubenswrapper[4897]: I0214 20:04:26.198887 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-m5nfk_0c6cb6a4-76e1-4568-9fc5-6069cc28b9e6/manager/0.log"
Feb 14 20:04:26 crc kubenswrapper[4897]: I0214 20:04:26.340099 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-qbz5t_fd2bec18-d3b4-4ee2-bfb9-34e5d1ddff3a/manager/0.log"
Feb 14 20:04:26 crc kubenswrapper[4897]: I0214 20:04:26.527759 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-7fnnb_f8e83507-87e8-44e6-a08d-f1f45f8b4ee0/manager/0.log"
Feb 14 20:04:26 crc kubenswrapper[4897]: I0214 20:04:26.718702 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-vv2k7_26f58f32-c15c-49c7-8756-fc2bae972a2d/manager/0.log"
Feb 14 20:04:26 crc kubenswrapper[4897]: I0214 20:04:26.755103 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-778945c4f9-cbw2h_4243feec-23ed-4292-9291-7ad01f7d12a6/manager/0.log"
Feb 14 20:04:26 crc kubenswrapper[4897]: I0214 20:04:26.834276 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58f847fcbd-9djqq_949ed147-ec0c-4e17-bc34-4d27018a9567/manager/0.log"
Feb 14 20:04:32 crc kubenswrapper[4897]: I0214 20:04:32.790809 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-2lvr5_48e0b91f-f946-4ecc-b36c-fc280e728f77/manager/0.log"
Feb 14 20:04:48 crc kubenswrapper[4897]: I0214 20:04:48.958312 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2hqm2_9baea172-0e9d-4866-917e-c5e0a57e1413/control-plane-machine-set-operator/0.log"
Feb 14 20:04:49 crc kubenswrapper[4897]: I0214 20:04:49.149943 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zh576_c6a281f2-1a7e-419e-8736-57c1a3bae82e/machine-api-operator/0.log"
Feb 14 20:04:49 crc kubenswrapper[4897]: I0214 20:04:49.154677 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zh576_c6a281f2-1a7e-419e-8736-57c1a3bae82e/kube-rbac-proxy/0.log"
Feb 14 20:05:02 crc kubenswrapper[4897]: I0214 20:05:02.661159 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-slgqx_89273d01-2f22-4f94-8217-2b51d8b1319b/cert-manager-controller/0.log"
Feb 14 20:05:02 crc kubenswrapper[4897]: I0214 20:05:02.836666 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5jj96_6fe45416-c3cc-40b0-bffb-d43af376cebe/cert-manager-cainjector/0.log"
Feb 14 20:05:02 crc kubenswrapper[4897]: I0214 20:05:02.905277 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pmlmt_0b1febb3-dc70-4cd5-9a48-024547405da7/cert-manager-webhook/0.log"
Feb 14 20:05:16 crc kubenswrapper[4897]: I0214 20:05:16.332838 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-2vrtf_d29745b2-a844-4447-bd55-859d755cf733/nmstate-console-plugin/0.log"
Feb 14 20:05:16 crc kubenswrapper[4897]: I0214 20:05:16.557953 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-d5lnt_ff7e179e-a00c-436b-bf50-c14810288beb/nmstate-handler/0.log"
Feb 14 20:05:16 crc kubenswrapper[4897]: I0214 20:05:16.635920 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gg4wk_9cc05bdf-cb61-462f-a326-9f8058bfa699/kube-rbac-proxy/0.log"
Feb 14 20:05:16 crc kubenswrapper[4897]: I0214 20:05:16.665660 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gg4wk_9cc05bdf-cb61-462f-a326-9f8058bfa699/nmstate-metrics/0.log"
Feb 14 20:05:16 crc kubenswrapper[4897]: I0214 20:05:16.805351 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-lprfm_02f39a6b-a277-4235-a912-61b98953c097/nmstate-operator/0.log"
Feb 14 20:05:16 crc kubenswrapper[4897]: I0214 20:05:16.858981 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-tf6nv_c70ba798-8c12-43e8-a0e2-d54617b6bb84/nmstate-webhook/0.log"
Feb 14 20:05:31 crc kubenswrapper[4897]: I0214 20:05:31.727019 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 20:05:31 crc kubenswrapper[4897]: I0214 20:05:31.728012 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 20:05:32 crc kubenswrapper[4897]: I0214 20:05:32.484478 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78d86b9dcc-fgbpn_ab082f7b-c89d-4db4-a04f-e2db844fa022/manager/0.log"
Feb 14 20:05:32 crc kubenswrapper[4897]: I0214 20:05:32.528187 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78d86b9dcc-fgbpn_ab082f7b-c89d-4db4-a04f-e2db844fa022/kube-rbac-proxy/0.log"
Feb 14 20:05:47 crc kubenswrapper[4897]: I0214 20:05:47.899407 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-nttxw_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156/prometheus-operator/0.log"
Feb 14 20:05:48 crc kubenswrapper[4897]: I0214 20:05:48.022830 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_869c7f86-090e-405c-9147-0815dbdd87c2/prometheus-operator-admission-webhook/0.log"
Feb 14 20:05:49 crc kubenswrapper[4897]: I0214 20:05:49.217435 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5/prometheus-operator-admission-webhook/0.log"
Feb 14 20:05:49 crc kubenswrapper[4897]: I0214 20:05:49.360047 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9t57n_7f9fcba2-5e97-421b-8868-b497df246731/operator/0.log"
Feb 14 20:05:49 crc kubenswrapper[4897]: I0214 20:05:49.538750 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-t9xgt_7683e04b-bb89-48c2-bff0-75d052f26e7f/observability-ui-dashboards/0.log"
Feb 14 20:05:49 crc kubenswrapper[4897]: I0214 20:05:49.585782 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-q66h9_b37fa061-9005-4aec-8681-c1107aad5075/perses-operator/0.log"
Feb 14 20:06:01 crc kubenswrapper[4897]: I0214 20:06:01.726539 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 20:06:01 crc kubenswrapper[4897]: I0214 20:06:01.727216 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 20:06:07 crc kubenswrapper[4897]: I0214 20:06:07.892008 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-x9vg8_2b81a3a6-44a0-4196-a84f-0eb00c65ce57/cluster-logging-operator/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.075804 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-9q6vx_d86c8472-f6f6-46c3-9a79-6abfb848be75/collector/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.163808 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_9e988817-cbfc-4faf-a31e-bf357c1c4691/loki-compactor/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.337548 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-lx9b2_0f4eb68c-7592-4025-a9a0-d5ed85aeec3c/loki-distributor/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.363096 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-c7757d78c-ctkkw_969ba5ce-9b29-41f2-ba75-76f548daa534/gateway/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.468447 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-c7757d78c-ctkkw_969ba5ce-9b29-41f2-ba75-76f548daa534/opa/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.554136 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-c7757d78c-fb7zn_cec4c0da-107d-4f6d-946d-2ffe925883e4/gateway/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.601338 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-c7757d78c-fb7zn_cec4c0da-107d-4f6d-946d-2ffe925883e4/opa/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.749732 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_62b896b4-5861-4fa8-ac40-642f2d8688b5/loki-index-gateway/0.log"
Feb 14 20:06:08 crc kubenswrapper[4897]: I0214 20:06:08.864841 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_740f1f83-6c75-4e47-a5c5-6a0ef1d40cca/loki-ingester/0.log"
Feb 14 20:06:09 crc kubenswrapper[4897]: I0214 20:06:09.031946 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-jw9nh_74485545-1349-4cd2-9764-72af83ba9aa1/loki-querier/0.log"
Feb 14 20:06:09 crc kubenswrapper[4897]: I0214 20:06:09.082153 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-zhtld_fed2ea1c-038a-40eb-a753-68705d1ae150/loki-query-frontend/0.log"
Feb 14 20:06:25 crc kubenswrapper[4897]: I0214 20:06:25.598851 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mdj4b_4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b/kube-rbac-proxy/0.log"
Feb 14 20:06:25 crc kubenswrapper[4897]: I0214 20:06:25.743450 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mdj4b_4bfd25e6-cec1-440a-8bfb-3ce2bcb4ab8b/controller/0.log"
Feb 14 20:06:25 crc kubenswrapper[4897]: I0214 20:06:25.819819 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-frr-files/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.002865 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-reloader/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.052010 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-metrics/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.076509 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-frr-files/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.093413 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-reloader/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.245103 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-frr-files/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.280072 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-metrics/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.287656 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-metrics/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.290437 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-reloader/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.459180 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-frr-files/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.469696 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-reloader/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.491938 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/cp-metrics/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.525615 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/controller/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.668570 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/frr-metrics/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.690293 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/frr/1.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.734785 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/kube-rbac-proxy/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.909613 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/kube-rbac-proxy-frr/0.log"
Feb 14 20:06:26 crc kubenswrapper[4897]: I0214 20:06:26.953869 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/reloader/0.log"
Feb 14 20:06:27 crc kubenswrapper[4897]: I0214 20:06:27.120020 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-n6ptt_7ea0a9e9-940c-4856-8fd0-f19994e3b810/frr-k8s-webhook-server/0.log"
Feb 14 20:06:27 crc kubenswrapper[4897]: I0214 20:06:27.227024 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cc9d46ffd-mbftl_1ef9cd33-5ad0-494f-9d50-177eadf0483f/manager/0.log"
Feb 14 20:06:27 crc kubenswrapper[4897]: I0214 20:06:27.432972 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c8d485b4-vdmjx_de593d8b-e41e-4a52-bead-28e46be05e4d/webhook-server/0.log"
Feb 14 20:06:27 crc kubenswrapper[4897]: I0214 20:06:27.600443 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4r6x6_ae82eac1-c909-47f2-b4b5-2f3f1267345e/kube-rbac-proxy/0.log"
Feb 14 20:06:28 crc kubenswrapper[4897]: I0214 20:06:28.208928 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4r6x6_ae82eac1-c909-47f2-b4b5-2f3f1267345e/speaker/0.log"
Feb 14 20:06:28 crc kubenswrapper[4897]: I0214 20:06:28.275706 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ks77p_1b139a41-dd2e-42ba-a86d-01ade60da46f/frr/0.log"
Feb 14 20:06:31 crc kubenswrapper[4897]: I0214 20:06:31.725623 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 20:06:31 crc kubenswrapper[4897]: I0214 20:06:31.725926 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 20:06:31 crc kubenswrapper[4897]: I0214 20:06:31.725970 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq"
Feb 14 20:06:31 crc kubenswrapper[4897]: I0214 20:06:31.728905 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0920c69bdd62f6bfbe3c53d6427630f4e3c45b27232e9664bc51391f7b5b4491"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 20:06:31 crc kubenswrapper[4897]: I0214 20:06:31.729298 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://0920c69bdd62f6bfbe3c53d6427630f4e3c45b27232e9664bc51391f7b5b4491" gracePeriod=600
Feb 14 20:06:32 crc kubenswrapper[4897]: I0214 20:06:32.837464 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="0920c69bdd62f6bfbe3c53d6427630f4e3c45b27232e9664bc51391f7b5b4491" exitCode=0
Feb 14 20:06:32 crc kubenswrapper[4897]: I0214 20:06:32.837540 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"0920c69bdd62f6bfbe3c53d6427630f4e3c45b27232e9664bc51391f7b5b4491"}
Feb 14 20:06:32 crc kubenswrapper[4897]: I0214 20:06:32.837803 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"}
Feb 14 20:06:32 crc kubenswrapper[4897]: I0214 20:06:32.837827 4897 scope.go:117] "RemoveContainer" containerID="a5f1057bccf5866787b041744b3d8033e7205643bbe0f33a6bbaaa76aba0f23d"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.346018 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/util/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.518559 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/pull/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.575827 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/util/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.627513 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/pull/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.777745 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/pull/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.788215 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/util/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.827591 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19cc6x7_b1d83377-16af-4d9a-ad7d-3d0c2059b951/extract/0.log"
Feb 14 20:06:43 crc kubenswrapper[4897]: I0214 20:06:43.973123 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/util/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.133001 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/util/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.144140 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/pull/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.192819 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/pull/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.385537 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/extract/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.413261 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/util/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.432319 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mtpzj_ba87404e-9bf2-4003-a612-0461c1af3db2/pull/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.572319 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/util/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.751756 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/pull/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.756546 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/util/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.793703 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/pull/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.955651 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/pull/0.log"
Feb 14 20:06:44 crc kubenswrapper[4897]: I0214 20:06:44.959569 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/util/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.000413 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135d8wv_700ecb41-d155-4a7c-94c0-91daf79fef82/extract/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.135005 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/extract-utilities/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.281341 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/extract-content/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.304434 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/extract-content/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.309227 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/extract-utilities/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.460223 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/extract-utilities/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.491196 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/extract-content/0.log"
Feb 14 20:06:45 crc kubenswrapper[4897]: I0214 20:06:45.752871 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/extract-utilities/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.231770 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vgcv6_3e2a05b2-5d93-4252-a08b-6b35f225e167/registry-server/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.436467 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/extract-content/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.455874 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/extract-utilities/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.493455 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/extract-content/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.657941 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/extract-utilities/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.688184 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/extract-content/0.log"
Feb 14 20:06:46 crc kubenswrapper[4897]: I0214 20:06:46.911455 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/util/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.130586 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/pull/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.132382 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/util/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.136465 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/pull/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.569454 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/util/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.599132 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w9dlm_93aca208-9cef-49a3-917c-2bb7c314d537/registry-server/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.614063 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/extract/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.619134 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989bpsvh_dd2fdb9b-baae-4e3c-93bf-dcd940d8c50e/pull/0.log"
Feb 14 20:06:47 crc kubenswrapper[4897]: I0214 20:06:47.740276 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/util/0.log"
Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.452358 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/pull/0.log"
Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.489503 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/util/0.log"
Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.494908 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/pull/0.log"
Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.588693 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/util/0.log"
Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.625201 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/pull/0.log"
Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.668347 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ndtpt_c87321f8-a781-4a08-93e8-2280f2ee57b8/marketplace-operator/0.log" Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.694285 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavcmb2_4669d0a9-6bb7-4e10-9e83-88038ec23e72/extract/0.log" Feb 14 20:06:48 crc kubenswrapper[4897]: I0214 20:06:48.818149 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/extract-utilities/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.038978 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/extract-utilities/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.045422 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/extract-content/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.056136 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/extract-content/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.213381 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/extract-utilities/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.236169 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/extract-utilities/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.286568 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/extract-content/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.417065 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zqcpc_ac059afa-1f7b-480b-8650-c227c33ba696/registry-server/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.507854 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/extract-utilities/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.520733 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/extract-content/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.533862 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/extract-content/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.684820 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/extract-content/0.log" Feb 14 20:06:49 crc kubenswrapper[4897]: I0214 20:06:49.701170 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/extract-utilities/0.log" Feb 14 20:06:50 crc kubenswrapper[4897]: I0214 20:06:50.415267 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-79v5s_170e914d-6f55-4d61-bb7d-36dae4e4b002/registry-server/0.log" Feb 14 20:07:04 crc kubenswrapper[4897]: I0214 20:07:04.938435 4897 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-nttxw_3d91a41b-7d8f-4ad4-9005-1a3bf7c40156/prometheus-operator/0.log" Feb 14 20:07:04 crc kubenswrapper[4897]: I0214 20:07:04.939257 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d5786dfdc-jmkl2_869c7f86-090e-405c-9147-0815dbdd87c2/prometheus-operator-admission-webhook/0.log" Feb 14 20:07:04 crc kubenswrapper[4897]: I0214 20:07:04.995216 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d5786dfdc-w5jvg_3d01fb0e-8e21-4c41-90ea-2644e1d4f2f5/prometheus-operator-admission-webhook/0.log" Feb 14 20:07:05 crc kubenswrapper[4897]: I0214 20:07:05.110373 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-t9xgt_7683e04b-bb89-48c2-bff0-75d052f26e7f/observability-ui-dashboards/0.log" Feb 14 20:07:05 crc kubenswrapper[4897]: I0214 20:07:05.118152 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9t57n_7f9fcba2-5e97-421b-8868-b497df246731/operator/0.log" Feb 14 20:07:05 crc kubenswrapper[4897]: I0214 20:07:05.140904 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-q66h9_b37fa061-9005-4aec-8681-c1107aad5075/perses-operator/0.log" Feb 14 20:07:21 crc kubenswrapper[4897]: I0214 20:07:21.478129 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78d86b9dcc-fgbpn_ab082f7b-c89d-4db4-a04f-e2db844fa022/kube-rbac-proxy/0.log" Feb 14 20:07:21 crc kubenswrapper[4897]: I0214 20:07:21.507096 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78d86b9dcc-fgbpn_ab082f7b-c89d-4db4-a04f-e2db844fa022/manager/0.log" Feb 14 20:07:31 
crc kubenswrapper[4897]: E0214 20:07:31.039297 4897 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:56882->38.102.83.41:34573: write tcp 38.102.83.41:56882->38.102.83.41:34573: write: broken pipe Feb 14 20:08:44 crc kubenswrapper[4897]: I0214 20:08:44.482571 4897 scope.go:117] "RemoveContainer" containerID="6533b50994edfc3f46ec8281020f80245049cf1fc931aee8e2aaad8a07f26a1b" Feb 14 20:09:01 crc kubenswrapper[4897]: I0214 20:09:01.726330 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 20:09:01 crc kubenswrapper[4897]: I0214 20:09:01.728189 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 20:09:19 crc kubenswrapper[4897]: I0214 20:09:19.057987 4897 generic.go:334] "Generic (PLEG): container finished" podID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerID="6d7489eb91351e7bd3c419435d24d744a4cec100ec53dbb6fbfbdcc8064656c4" exitCode=0 Feb 14 20:09:19 crc kubenswrapper[4897]: I0214 20:09:19.058077 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwjd/must-gather-whhz4" event={"ID":"ea578f80-e5d1-4648-bd64-a8144b08671c","Type":"ContainerDied","Data":"6d7489eb91351e7bd3c419435d24d744a4cec100ec53dbb6fbfbdcc8064656c4"} Feb 14 20:09:19 crc kubenswrapper[4897]: I0214 20:09:19.059863 4897 scope.go:117] "RemoveContainer" containerID="6d7489eb91351e7bd3c419435d24d744a4cec100ec53dbb6fbfbdcc8064656c4" Feb 14 20:09:19 crc kubenswrapper[4897]: I0214 20:09:19.572281 4897 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwjd_must-gather-whhz4_ea578f80-e5d1-4648-bd64-a8144b08671c/gather/0.log" Feb 14 20:09:27 crc kubenswrapper[4897]: I0214 20:09:27.858951 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwjd/must-gather-whhz4"] Feb 14 20:09:27 crc kubenswrapper[4897]: I0214 20:09:27.859715 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wjwjd/must-gather-whhz4" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="copy" containerID="cri-o://73b5bb6ca2d0b021f05a26cece0dab82e874e1053214c682811180ebef7f88ec" gracePeriod=2 Feb 14 20:09:27 crc kubenswrapper[4897]: I0214 20:09:27.880524 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwjd/must-gather-whhz4"] Feb 14 20:09:28 crc kubenswrapper[4897]: I0214 20:09:28.189149 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwjd_must-gather-whhz4_ea578f80-e5d1-4648-bd64-a8144b08671c/copy/0.log" Feb 14 20:09:28 crc kubenswrapper[4897]: I0214 20:09:28.189556 4897 generic.go:334] "Generic (PLEG): container finished" podID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerID="73b5bb6ca2d0b021f05a26cece0dab82e874e1053214c682811180ebef7f88ec" exitCode=143 Feb 14 20:09:29 crc kubenswrapper[4897]: I0214 20:09:29.721307 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwjd_must-gather-whhz4_ea578f80-e5d1-4648-bd64-a8144b08671c/copy/0.log" Feb 14 20:09:29 crc kubenswrapper[4897]: I0214 20:09:29.724221 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:09:29 crc kubenswrapper[4897]: I0214 20:09:29.842447 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ea578f80-e5d1-4648-bd64-a8144b08671c-must-gather-output\") pod \"ea578f80-e5d1-4648-bd64-a8144b08671c\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " Feb 14 20:09:29 crc kubenswrapper[4897]: I0214 20:09:29.843006 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66rp4\" (UniqueName: \"kubernetes.io/projected/ea578f80-e5d1-4648-bd64-a8144b08671c-kube-api-access-66rp4\") pod \"ea578f80-e5d1-4648-bd64-a8144b08671c\" (UID: \"ea578f80-e5d1-4648-bd64-a8144b08671c\") " Feb 14 20:09:29 crc kubenswrapper[4897]: I0214 20:09:29.854371 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea578f80-e5d1-4648-bd64-a8144b08671c-kube-api-access-66rp4" (OuterVolumeSpecName: "kube-api-access-66rp4") pod "ea578f80-e5d1-4648-bd64-a8144b08671c" (UID: "ea578f80-e5d1-4648-bd64-a8144b08671c"). InnerVolumeSpecName "kube-api-access-66rp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:09:29 crc kubenswrapper[4897]: I0214 20:09:29.947737 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66rp4\" (UniqueName: \"kubernetes.io/projected/ea578f80-e5d1-4648-bd64-a8144b08671c-kube-api-access-66rp4\") on node \"crc\" DevicePath \"\"" Feb 14 20:09:30 crc kubenswrapper[4897]: I0214 20:09:30.058355 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea578f80-e5d1-4648-bd64-a8144b08671c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ea578f80-e5d1-4648-bd64-a8144b08671c" (UID: "ea578f80-e5d1-4648-bd64-a8144b08671c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:09:30 crc kubenswrapper[4897]: I0214 20:09:30.152725 4897 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ea578f80-e5d1-4648-bd64-a8144b08671c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 14 20:09:30 crc kubenswrapper[4897]: I0214 20:09:30.227280 4897 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwjd_must-gather-whhz4_ea578f80-e5d1-4648-bd64-a8144b08671c/copy/0.log" Feb 14 20:09:30 crc kubenswrapper[4897]: I0214 20:09:30.228267 4897 scope.go:117] "RemoveContainer" containerID="73b5bb6ca2d0b021f05a26cece0dab82e874e1053214c682811180ebef7f88ec" Feb 14 20:09:30 crc kubenswrapper[4897]: I0214 20:09:30.228370 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwjd/must-gather-whhz4" Feb 14 20:09:30 crc kubenswrapper[4897]: I0214 20:09:30.266661 4897 scope.go:117] "RemoveContainer" containerID="6d7489eb91351e7bd3c419435d24d744a4cec100ec53dbb6fbfbdcc8064656c4" Feb 14 20:09:31 crc kubenswrapper[4897]: I0214 20:09:31.726468 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 20:09:31 crc kubenswrapper[4897]: I0214 20:09:31.726885 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 20:09:31 crc kubenswrapper[4897]: I0214 20:09:31.811391 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" path="/var/lib/kubelet/pods/ea578f80-e5d1-4648-bd64-a8144b08671c/volumes" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.294074 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fzlzq"] Feb 14 20:09:57 crc kubenswrapper[4897]: E0214 20:09:57.295295 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="gather" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.295314 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="gather" Feb 14 20:09:57 crc kubenswrapper[4897]: E0214 20:09:57.295350 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="copy" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.295358 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="copy" Feb 14 20:09:57 crc kubenswrapper[4897]: E0214 20:09:57.295374 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f81d3a1-ce2e-4369-bc50-13fc46a13823" containerName="container-00" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.295383 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f81d3a1-ce2e-4369-bc50-13fc46a13823" containerName="container-00" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.295773 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="copy" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.295800 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea578f80-e5d1-4648-bd64-a8144b08671c" containerName="gather" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.295825 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f81d3a1-ce2e-4369-bc50-13fc46a13823" 
containerName="container-00" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.300296 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.308790 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzlzq"] Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.405231 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-utilities\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.405324 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vrl9\" (UniqueName: \"kubernetes.io/projected/476f1ebf-fa52-49c7-b00b-d7d055386de6-kube-api-access-8vrl9\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.405526 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-catalog-content\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.508654 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-utilities\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " 
pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.508726 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vrl9\" (UniqueName: \"kubernetes.io/projected/476f1ebf-fa52-49c7-b00b-d7d055386de6-kube-api-access-8vrl9\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.508957 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-catalog-content\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.509209 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-utilities\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.509423 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-catalog-content\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.531007 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vrl9\" (UniqueName: \"kubernetes.io/projected/476f1ebf-fa52-49c7-b00b-d7d055386de6-kube-api-access-8vrl9\") pod \"certified-operators-fzlzq\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " 
pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:57 crc kubenswrapper[4897]: I0214 20:09:57.632427 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:09:58 crc kubenswrapper[4897]: I0214 20:09:58.165732 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzlzq"] Feb 14 20:09:58 crc kubenswrapper[4897]: W0214 20:09:58.387179 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476f1ebf_fa52_49c7_b00b_d7d055386de6.slice/crio-079f4e96e2de76f55e307ac7b267a9c254b15f531343e49face534ea427fe071 WatchSource:0}: Error finding container 079f4e96e2de76f55e307ac7b267a9c254b15f531343e49face534ea427fe071: Status 404 returned error can't find the container with id 079f4e96e2de76f55e307ac7b267a9c254b15f531343e49face534ea427fe071 Feb 14 20:09:58 crc kubenswrapper[4897]: I0214 20:09:58.573220 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerStarted","Data":"079f4e96e2de76f55e307ac7b267a9c254b15f531343e49face534ea427fe071"} Feb 14 20:09:59 crc kubenswrapper[4897]: I0214 20:09:59.590219 4897 generic.go:334] "Generic (PLEG): container finished" podID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerID="10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6" exitCode=0 Feb 14 20:09:59 crc kubenswrapper[4897]: I0214 20:09:59.590312 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerDied","Data":"10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6"} Feb 14 20:09:59 crc kubenswrapper[4897]: I0214 20:09:59.592716 4897 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 14 20:10:01 crc kubenswrapper[4897]: I0214 20:10:01.619087 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerStarted","Data":"567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2"} Feb 14 20:10:01 crc kubenswrapper[4897]: I0214 20:10:01.726097 4897 patch_prober.go:28] interesting pod/machine-config-daemon-k5mzq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 20:10:01 crc kubenswrapper[4897]: I0214 20:10:01.726166 4897 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 20:10:01 crc kubenswrapper[4897]: I0214 20:10:01.726211 4897 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" Feb 14 20:10:01 crc kubenswrapper[4897]: I0214 20:10:01.727468 4897 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"} pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 20:10:01 crc kubenswrapper[4897]: I0214 20:10:01.727549 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" 
podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerName="machine-config-daemon" containerID="cri-o://30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" gracePeriod=600 Feb 14 20:10:02 crc kubenswrapper[4897]: E0214 20:10:02.386943 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:10:02 crc kubenswrapper[4897]: I0214 20:10:02.632974 4897 generic.go:334] "Generic (PLEG): container finished" podID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerID="567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2" exitCode=0 Feb 14 20:10:02 crc kubenswrapper[4897]: I0214 20:10:02.633065 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerDied","Data":"567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2"} Feb 14 20:10:02 crc kubenswrapper[4897]: I0214 20:10:02.635599 4897 generic.go:334] "Generic (PLEG): container finished" podID="9f885c6c-b913-48e3-93fc-abf932515ea9" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" exitCode=0 Feb 14 20:10:02 crc kubenswrapper[4897]: I0214 20:10:02.635652 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerDied","Data":"30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"} Feb 14 20:10:02 crc kubenswrapper[4897]: I0214 20:10:02.635691 4897 scope.go:117] "RemoveContainer" 
containerID="0920c69bdd62f6bfbe3c53d6427630f4e3c45b27232e9664bc51391f7b5b4491" Feb 14 20:10:02 crc kubenswrapper[4897]: I0214 20:10:02.637544 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:10:02 crc kubenswrapper[4897]: E0214 20:10:02.637953 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:10:03 crc kubenswrapper[4897]: I0214 20:10:03.651851 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerStarted","Data":"c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e"} Feb 14 20:10:03 crc kubenswrapper[4897]: I0214 20:10:03.681812 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fzlzq" podStartSLOduration=3.234063219 podStartE2EDuration="6.681792686s" podCreationTimestamp="2026-02-14 20:09:57 +0000 UTC" firstStartedPulling="2026-02-14 20:09:59.592392142 +0000 UTC m=+5252.568800635" lastFinishedPulling="2026-02-14 20:10:03.040121609 +0000 UTC m=+5256.016530102" observedRunningTime="2026-02-14 20:10:03.671304464 +0000 UTC m=+5256.647712957" watchObservedRunningTime="2026-02-14 20:10:03.681792686 +0000 UTC m=+5256.658201169" Feb 14 20:10:07 crc kubenswrapper[4897]: I0214 20:10:07.632562 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:10:07 crc kubenswrapper[4897]: I0214 20:10:07.633696 4897 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:10:08 crc kubenswrapper[4897]: I0214 20:10:08.698946 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fzlzq" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="registry-server" probeResult="failure" output=< Feb 14 20:10:08 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:10:08 crc kubenswrapper[4897]: > Feb 14 20:10:13 crc kubenswrapper[4897]: I0214 20:10:13.793926 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:10:13 crc kubenswrapper[4897]: E0214 20:10:13.794809 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:10:17 crc kubenswrapper[4897]: I0214 20:10:17.723182 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:10:17 crc kubenswrapper[4897]: I0214 20:10:17.823629 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:10:17 crc kubenswrapper[4897]: I0214 20:10:17.980337 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzlzq"] Feb 14 20:10:18 crc kubenswrapper[4897]: I0214 20:10:18.878274 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fzlzq" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" 
containerName="registry-server" containerID="cri-o://c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e" gracePeriod=2 Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.528274 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.633622 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-catalog-content\") pod \"476f1ebf-fa52-49c7-b00b-d7d055386de6\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.633706 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vrl9\" (UniqueName: \"kubernetes.io/projected/476f1ebf-fa52-49c7-b00b-d7d055386de6-kube-api-access-8vrl9\") pod \"476f1ebf-fa52-49c7-b00b-d7d055386de6\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.633748 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-utilities\") pod \"476f1ebf-fa52-49c7-b00b-d7d055386de6\" (UID: \"476f1ebf-fa52-49c7-b00b-d7d055386de6\") " Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.635077 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-utilities" (OuterVolumeSpecName: "utilities") pod "476f1ebf-fa52-49c7-b00b-d7d055386de6" (UID: "476f1ebf-fa52-49c7-b00b-d7d055386de6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.644461 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476f1ebf-fa52-49c7-b00b-d7d055386de6-kube-api-access-8vrl9" (OuterVolumeSpecName: "kube-api-access-8vrl9") pod "476f1ebf-fa52-49c7-b00b-d7d055386de6" (UID: "476f1ebf-fa52-49c7-b00b-d7d055386de6"). InnerVolumeSpecName "kube-api-access-8vrl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.678628 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "476f1ebf-fa52-49c7-b00b-d7d055386de6" (UID: "476f1ebf-fa52-49c7-b00b-d7d055386de6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.736177 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.736213 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vrl9\" (UniqueName: \"kubernetes.io/projected/476f1ebf-fa52-49c7-b00b-d7d055386de6-kube-api-access-8vrl9\") on node \"crc\" DevicePath \"\"" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.736227 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476f1ebf-fa52-49c7-b00b-d7d055386de6-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.890048 4897 generic.go:334] "Generic (PLEG): container finished" podID="476f1ebf-fa52-49c7-b00b-d7d055386de6" 
containerID="c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e" exitCode=0 Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.890095 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerDied","Data":"c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e"} Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.890325 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzlzq" event={"ID":"476f1ebf-fa52-49c7-b00b-d7d055386de6","Type":"ContainerDied","Data":"079f4e96e2de76f55e307ac7b267a9c254b15f531343e49face534ea427fe071"} Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.890349 4897 scope.go:117] "RemoveContainer" containerID="c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.890119 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fzlzq" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.915916 4897 scope.go:117] "RemoveContainer" containerID="567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2" Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.918996 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzlzq"] Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.932343 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fzlzq"] Feb 14 20:10:19 crc kubenswrapper[4897]: I0214 20:10:19.945555 4897 scope.go:117] "RemoveContainer" containerID="10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6" Feb 14 20:10:20 crc kubenswrapper[4897]: I0214 20:10:20.001980 4897 scope.go:117] "RemoveContainer" containerID="c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e" Feb 14 20:10:20 crc kubenswrapper[4897]: E0214 20:10:20.003541 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e\": container with ID starting with c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e not found: ID does not exist" containerID="c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e" Feb 14 20:10:20 crc kubenswrapper[4897]: I0214 20:10:20.003972 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e"} err="failed to get container status \"c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e\": rpc error: code = NotFound desc = could not find container \"c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e\": container with ID starting with c542a805445695158b52f7a14006a13a973351c934028c069d056fef635b730e not 
found: ID does not exist" Feb 14 20:10:20 crc kubenswrapper[4897]: I0214 20:10:20.004006 4897 scope.go:117] "RemoveContainer" containerID="567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2" Feb 14 20:10:20 crc kubenswrapper[4897]: E0214 20:10:20.005417 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2\": container with ID starting with 567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2 not found: ID does not exist" containerID="567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2" Feb 14 20:10:20 crc kubenswrapper[4897]: I0214 20:10:20.005478 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2"} err="failed to get container status \"567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2\": rpc error: code = NotFound desc = could not find container \"567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2\": container with ID starting with 567db54b91a1205cf8047d31834bbd31c35fc7ad1095f99de9e88018442b7ba2 not found: ID does not exist" Feb 14 20:10:20 crc kubenswrapper[4897]: I0214 20:10:20.005508 4897 scope.go:117] "RemoveContainer" containerID="10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6" Feb 14 20:10:20 crc kubenswrapper[4897]: E0214 20:10:20.006498 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6\": container with ID starting with 10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6 not found: ID does not exist" containerID="10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6" Feb 14 20:10:20 crc kubenswrapper[4897]: I0214 20:10:20.006530 4897 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6"} err="failed to get container status \"10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6\": rpc error: code = NotFound desc = could not find container \"10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6\": container with ID starting with 10fa5613a896859cb86f6a0b035742344be1465436ab6eea9952daa192b16ef6 not found: ID does not exist" Feb 14 20:10:21 crc kubenswrapper[4897]: I0214 20:10:21.814674 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" path="/var/lib/kubelet/pods/476f1ebf-fa52-49c7-b00b-d7d055386de6/volumes" Feb 14 20:10:25 crc kubenswrapper[4897]: I0214 20:10:25.795648 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:10:25 crc kubenswrapper[4897]: E0214 20:10:25.797087 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:10:38 crc kubenswrapper[4897]: I0214 20:10:38.793818 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:10:38 crc kubenswrapper[4897]: E0214 20:10:38.794716 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:10:51 crc kubenswrapper[4897]: I0214 20:10:51.793943 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:10:51 crc kubenswrapper[4897]: E0214 20:10:51.794820 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.920063 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dt4dv"] Feb 14 20:10:53 crc kubenswrapper[4897]: E0214 20:10:53.921932 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="registry-server" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.921953 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="registry-server" Feb 14 20:10:53 crc kubenswrapper[4897]: E0214 20:10:53.921994 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="extract-utilities" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.922004 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="extract-utilities" Feb 14 20:10:53 crc kubenswrapper[4897]: E0214 20:10:53.922097 4897 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="extract-content" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.922107 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="extract-content" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.922480 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="476f1ebf-fa52-49c7-b00b-d7d055386de6" containerName="registry-server" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.925226 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:53 crc kubenswrapper[4897]: I0214 20:10:53.953844 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dt4dv"] Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.006444 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-catalog-content\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.006534 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-utilities\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.006950 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb749\" (UniqueName: \"kubernetes.io/projected/dad55399-3cdc-4b71-88a1-0a81d39d37dc-kube-api-access-lb749\") pod \"community-operators-dt4dv\" (UID: 
\"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.110489 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-catalog-content\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.110594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-utilities\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.110912 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb749\" (UniqueName: \"kubernetes.io/projected/dad55399-3cdc-4b71-88a1-0a81d39d37dc-kube-api-access-lb749\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.111386 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-catalog-content\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.111800 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-utilities\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") 
" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.133971 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb749\" (UniqueName: \"kubernetes.io/projected/dad55399-3cdc-4b71-88a1-0a81d39d37dc-kube-api-access-lb749\") pod \"community-operators-dt4dv\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.262754 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:10:54 crc kubenswrapper[4897]: I0214 20:10:54.761451 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dt4dv"] Feb 14 20:10:56 crc kubenswrapper[4897]: I0214 20:10:56.449233 4897 generic.go:334] "Generic (PLEG): container finished" podID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerID="4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2" exitCode=0 Feb 14 20:10:56 crc kubenswrapper[4897]: I0214 20:10:56.449967 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerDied","Data":"4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2"} Feb 14 20:10:56 crc kubenswrapper[4897]: I0214 20:10:56.450128 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerStarted","Data":"289f61c4edd354add4df5444d2637429a52bb7868b9713e1efbd6c32aff45bd7"} Feb 14 20:10:58 crc kubenswrapper[4897]: I0214 20:10:58.472315 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" 
event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerStarted","Data":"e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea"} Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.269342 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qs8z"] Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.274467 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.282977 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qs8z"] Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.347079 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q2jf\" (UniqueName: \"kubernetes.io/projected/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-kube-api-access-9q2jf\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.347298 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-catalog-content\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.347440 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-utilities\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.450299 4897 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-utilities\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.450423 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q2jf\" (UniqueName: \"kubernetes.io/projected/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-kube-api-access-9q2jf\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.450547 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-catalog-content\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.450866 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-utilities\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.451317 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-catalog-content\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.470941 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9q2jf\" (UniqueName: \"kubernetes.io/projected/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-kube-api-access-9q2jf\") pod \"redhat-operators-5qs8z\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.485096 4897 generic.go:334] "Generic (PLEG): container finished" podID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerID="e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea" exitCode=0 Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.486241 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerDied","Data":"e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea"} Feb 14 20:10:59 crc kubenswrapper[4897]: I0214 20:10:59.606334 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:11:00 crc kubenswrapper[4897]: I0214 20:11:00.074672 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qs8z"] Feb 14 20:11:00 crc kubenswrapper[4897]: W0214 20:11:00.094833 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34767ec5_83b6_4d47_b0bd_180d09eb6dcb.slice/crio-5324cde78eb61183ce1e8bc06e6a296454bbf44fdfcc2655f722d80e766736cf WatchSource:0}: Error finding container 5324cde78eb61183ce1e8bc06e6a296454bbf44fdfcc2655f722d80e766736cf: Status 404 returned error can't find the container with id 5324cde78eb61183ce1e8bc06e6a296454bbf44fdfcc2655f722d80e766736cf Feb 14 20:11:00 crc kubenswrapper[4897]: I0214 20:11:00.494110 4897 generic.go:334] "Generic (PLEG): container finished" podID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerID="8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3" exitCode=0 Feb 14 
20:11:00 crc kubenswrapper[4897]: I0214 20:11:00.494866 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerDied","Data":"8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3"} Feb 14 20:11:00 crc kubenswrapper[4897]: I0214 20:11:00.494895 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerStarted","Data":"5324cde78eb61183ce1e8bc06e6a296454bbf44fdfcc2655f722d80e766736cf"} Feb 14 20:11:00 crc kubenswrapper[4897]: I0214 20:11:00.496661 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerStarted","Data":"dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b"} Feb 14 20:11:00 crc kubenswrapper[4897]: I0214 20:11:00.546295 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dt4dv" podStartSLOduration=4.083408266 podStartE2EDuration="7.546279246s" podCreationTimestamp="2026-02-14 20:10:53 +0000 UTC" firstStartedPulling="2026-02-14 20:10:56.452986533 +0000 UTC m=+5309.429395006" lastFinishedPulling="2026-02-14 20:10:59.915857503 +0000 UTC m=+5312.892265986" observedRunningTime="2026-02-14 20:11:00.543535582 +0000 UTC m=+5313.519944065" watchObservedRunningTime="2026-02-14 20:11:00.546279246 +0000 UTC m=+5313.522687719" Feb 14 20:11:01 crc kubenswrapper[4897]: I0214 20:11:01.509470 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerStarted","Data":"cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3"} Feb 14 20:11:04 crc kubenswrapper[4897]: I0214 20:11:04.263477 4897 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:11:04 crc kubenswrapper[4897]: I0214 20:11:04.263920 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:11:05 crc kubenswrapper[4897]: I0214 20:11:05.338508 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dt4dv" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="registry-server" probeResult="failure" output=< Feb 14 20:11:05 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:11:05 crc kubenswrapper[4897]: > Feb 14 20:11:05 crc kubenswrapper[4897]: I0214 20:11:05.571750 4897 generic.go:334] "Generic (PLEG): container finished" podID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerID="cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3" exitCode=0 Feb 14 20:11:05 crc kubenswrapper[4897]: I0214 20:11:05.571832 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerDied","Data":"cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3"} Feb 14 20:11:05 crc kubenswrapper[4897]: I0214 20:11:05.795444 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:11:05 crc kubenswrapper[4897]: E0214 20:11:05.795916 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:11:06 crc kubenswrapper[4897]: 
I0214 20:11:06.583843 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerStarted","Data":"56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd"} Feb 14 20:11:06 crc kubenswrapper[4897]: I0214 20:11:06.608588 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qs8z" podStartSLOduration=2.125981474 podStartE2EDuration="7.608560316s" podCreationTimestamp="2026-02-14 20:10:59 +0000 UTC" firstStartedPulling="2026-02-14 20:11:00.496253853 +0000 UTC m=+5313.472662326" lastFinishedPulling="2026-02-14 20:11:05.978832685 +0000 UTC m=+5318.955241168" observedRunningTime="2026-02-14 20:11:06.600261762 +0000 UTC m=+5319.576670255" watchObservedRunningTime="2026-02-14 20:11:06.608560316 +0000 UTC m=+5319.584968839" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.474513 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vwnn"] Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.478265 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.493420 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vwnn"] Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.548860 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-utilities\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.549323 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-catalog-content\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.549457 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvff\" (UniqueName: \"kubernetes.io/projected/0f5c7888-5418-4d01-bebb-e96886381bcc-kube-api-access-8cvff\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.652255 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-utilities\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.652524 4897 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-catalog-content\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.652594 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvff\" (UniqueName: \"kubernetes.io/projected/0f5c7888-5418-4d01-bebb-e96886381bcc-kube-api-access-8cvff\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.652937 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-utilities\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.653168 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-catalog-content\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.671862 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvff\" (UniqueName: \"kubernetes.io/projected/0f5c7888-5418-4d01-bebb-e96886381bcc-kube-api-access-8cvff\") pod \"redhat-marketplace-4vwnn\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:07 crc kubenswrapper[4897]: I0214 20:11:07.796910 4897 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:08 crc kubenswrapper[4897]: W0214 20:11:08.268581 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5c7888_5418_4d01_bebb_e96886381bcc.slice/crio-31111dd764431da3461a57717484c13f43feab775bc5b142834db1f28ce4870e WatchSource:0}: Error finding container 31111dd764431da3461a57717484c13f43feab775bc5b142834db1f28ce4870e: Status 404 returned error can't find the container with id 31111dd764431da3461a57717484c13f43feab775bc5b142834db1f28ce4870e Feb 14 20:11:08 crc kubenswrapper[4897]: I0214 20:11:08.273769 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vwnn"] Feb 14 20:11:08 crc kubenswrapper[4897]: I0214 20:11:08.605220 4897 generic.go:334] "Generic (PLEG): container finished" podID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerID="307d9fd47c1f682af23f38b9e5e6567957de3900f881b703b43bea663b2bc637" exitCode=0 Feb 14 20:11:08 crc kubenswrapper[4897]: I0214 20:11:08.605306 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerDied","Data":"307d9fd47c1f682af23f38b9e5e6567957de3900f881b703b43bea663b2bc637"} Feb 14 20:11:08 crc kubenswrapper[4897]: I0214 20:11:08.605480 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerStarted","Data":"31111dd764431da3461a57717484c13f43feab775bc5b142834db1f28ce4870e"} Feb 14 20:11:09 crc kubenswrapper[4897]: I0214 20:11:09.606549 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:11:09 crc kubenswrapper[4897]: I0214 20:11:09.607315 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:11:09 crc kubenswrapper[4897]: I0214 20:11:09.634527 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerStarted","Data":"d58c3613441c2139f00fee3dfb84ce7702964194d53f961ac90cc7b835c83341"} Feb 14 20:11:10 crc kubenswrapper[4897]: I0214 20:11:10.646438 4897 generic.go:334] "Generic (PLEG): container finished" podID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerID="d58c3613441c2139f00fee3dfb84ce7702964194d53f961ac90cc7b835c83341" exitCode=0 Feb 14 20:11:10 crc kubenswrapper[4897]: I0214 20:11:10.646621 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerDied","Data":"d58c3613441c2139f00fee3dfb84ce7702964194d53f961ac90cc7b835c83341"} Feb 14 20:11:10 crc kubenswrapper[4897]: I0214 20:11:10.676557 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qs8z" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server" probeResult="failure" output=< Feb 14 20:11:10 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:11:10 crc kubenswrapper[4897]: > Feb 14 20:11:11 crc kubenswrapper[4897]: I0214 20:11:11.660498 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerStarted","Data":"24ad41e68be71ca7e1049a75cdb434accdb701507f0e119a9df8b7ee4afca775"} Feb 14 20:11:11 crc kubenswrapper[4897]: I0214 20:11:11.683859 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vwnn" podStartSLOduration=2.105876008 podStartE2EDuration="4.683842931s" podCreationTimestamp="2026-02-14 20:11:07 
+0000 UTC" firstStartedPulling="2026-02-14 20:11:08.607436219 +0000 UTC m=+5321.583844702" lastFinishedPulling="2026-02-14 20:11:11.185403142 +0000 UTC m=+5324.161811625" observedRunningTime="2026-02-14 20:11:11.674067941 +0000 UTC m=+5324.650476424" watchObservedRunningTime="2026-02-14 20:11:11.683842931 +0000 UTC m=+5324.660251414" Feb 14 20:11:14 crc kubenswrapper[4897]: I0214 20:11:14.328577 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:11:14 crc kubenswrapper[4897]: I0214 20:11:14.387441 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:11:14 crc kubenswrapper[4897]: I0214 20:11:14.654005 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dt4dv"] Feb 14 20:11:15 crc kubenswrapper[4897]: I0214 20:11:15.702130 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dt4dv" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="registry-server" containerID="cri-o://dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b" gracePeriod=2 Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.251058 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.395332 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-utilities\") pod \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.395422 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb749\" (UniqueName: \"kubernetes.io/projected/dad55399-3cdc-4b71-88a1-0a81d39d37dc-kube-api-access-lb749\") pod \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.395493 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-catalog-content\") pod \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\" (UID: \"dad55399-3cdc-4b71-88a1-0a81d39d37dc\") " Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.396442 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-utilities" (OuterVolumeSpecName: "utilities") pod "dad55399-3cdc-4b71-88a1-0a81d39d37dc" (UID: "dad55399-3cdc-4b71-88a1-0a81d39d37dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.407908 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad55399-3cdc-4b71-88a1-0a81d39d37dc-kube-api-access-lb749" (OuterVolumeSpecName: "kube-api-access-lb749") pod "dad55399-3cdc-4b71-88a1-0a81d39d37dc" (UID: "dad55399-3cdc-4b71-88a1-0a81d39d37dc"). InnerVolumeSpecName "kube-api-access-lb749". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.450727 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad55399-3cdc-4b71-88a1-0a81d39d37dc" (UID: "dad55399-3cdc-4b71-88a1-0a81d39d37dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.498733 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb749\" (UniqueName: \"kubernetes.io/projected/dad55399-3cdc-4b71-88a1-0a81d39d37dc-kube-api-access-lb749\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.498767 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.498778 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad55399-3cdc-4b71-88a1-0a81d39d37dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.716014 4897 generic.go:334] "Generic (PLEG): container finished" podID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerID="dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b" exitCode=0 Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.716084 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dt4dv" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.716105 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerDied","Data":"dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b"} Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.716172 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt4dv" event={"ID":"dad55399-3cdc-4b71-88a1-0a81d39d37dc","Type":"ContainerDied","Data":"289f61c4edd354add4df5444d2637429a52bb7868b9713e1efbd6c32aff45bd7"} Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.716202 4897 scope.go:117] "RemoveContainer" containerID="dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.754964 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dt4dv"] Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.763257 4897 scope.go:117] "RemoveContainer" containerID="e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.771002 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dt4dv"] Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.788253 4897 scope.go:117] "RemoveContainer" containerID="4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.843525 4897 scope.go:117] "RemoveContainer" containerID="dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b" Feb 14 20:11:16 crc kubenswrapper[4897]: E0214 20:11:16.843961 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b\": container with ID starting with dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b not found: ID does not exist" containerID="dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.844000 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b"} err="failed to get container status \"dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b\": rpc error: code = NotFound desc = could not find container \"dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b\": container with ID starting with dea4f93f03bd2948543ba36835ddf5b92e0cd64a8b7bddd1836696219100b98b not found: ID does not exist" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.844227 4897 scope.go:117] "RemoveContainer" containerID="e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea" Feb 14 20:11:16 crc kubenswrapper[4897]: E0214 20:11:16.844555 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea\": container with ID starting with e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea not found: ID does not exist" containerID="e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.844580 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea"} err="failed to get container status \"e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea\": rpc error: code = NotFound desc = could not find container \"e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea\": container with ID 
starting with e1f0ae1254582b66246ff6ca2f393b77ef2bfa156ecdde9a647b4468f023b1ea not found: ID does not exist" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.844595 4897 scope.go:117] "RemoveContainer" containerID="4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2" Feb 14 20:11:16 crc kubenswrapper[4897]: E0214 20:11:16.844824 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2\": container with ID starting with 4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2 not found: ID does not exist" containerID="4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2" Feb 14 20:11:16 crc kubenswrapper[4897]: I0214 20:11:16.844858 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2"} err="failed to get container status \"4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2\": rpc error: code = NotFound desc = could not find container \"4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2\": container with ID starting with 4700255c3f1f6fb07917d678f0996cde9c29a7f745321ca5156d06ae8bb88ca2 not found: ID does not exist" Feb 14 20:11:17 crc kubenswrapper[4897]: I0214 20:11:17.805531 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:11:17 crc kubenswrapper[4897]: E0214 20:11:17.820101 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" 
podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:11:17 crc kubenswrapper[4897]: I0214 20:11:17.832632 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" path="/var/lib/kubelet/pods/dad55399-3cdc-4b71-88a1-0a81d39d37dc/volumes" Feb 14 20:11:17 crc kubenswrapper[4897]: I0214 20:11:17.833721 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:17 crc kubenswrapper[4897]: I0214 20:11:17.833748 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:18 crc kubenswrapper[4897]: I0214 20:11:18.861244 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4vwnn" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="registry-server" probeResult="failure" output=< Feb 14 20:11:18 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:11:18 crc kubenswrapper[4897]: > Feb 14 20:11:20 crc kubenswrapper[4897]: I0214 20:11:20.659322 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qs8z" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server" probeResult="failure" output=< Feb 14 20:11:20 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:11:20 crc kubenswrapper[4897]: > Feb 14 20:11:27 crc kubenswrapper[4897]: I0214 20:11:27.872336 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:27 crc kubenswrapper[4897]: I0214 20:11:27.933570 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:28 crc kubenswrapper[4897]: I0214 20:11:28.114925 4897 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-4vwnn"] Feb 14 20:11:29 crc kubenswrapper[4897]: I0214 20:11:29.895719 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vwnn" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="registry-server" containerID="cri-o://24ad41e68be71ca7e1049a75cdb434accdb701507f0e119a9df8b7ee4afca775" gracePeriod=2 Feb 14 20:11:30 crc kubenswrapper[4897]: I0214 20:11:30.679622 4897 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qs8z" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server" probeResult="failure" output=< Feb 14 20:11:30 crc kubenswrapper[4897]: timeout: failed to connect service ":50051" within 1s Feb 14 20:11:30 crc kubenswrapper[4897]: > Feb 14 20:11:30 crc kubenswrapper[4897]: I0214 20:11:30.794199 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:11:30 crc kubenswrapper[4897]: E0214 20:11:30.794856 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:11:30 crc kubenswrapper[4897]: I0214 20:11:30.911563 4897 generic.go:334] "Generic (PLEG): container finished" podID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerID="24ad41e68be71ca7e1049a75cdb434accdb701507f0e119a9df8b7ee4afca775" exitCode=0 Feb 14 20:11:30 crc kubenswrapper[4897]: I0214 20:11:30.911606 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" 
event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerDied","Data":"24ad41e68be71ca7e1049a75cdb434accdb701507f0e119a9df8b7ee4afca775"} Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.120582 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.269997 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cvff\" (UniqueName: \"kubernetes.io/projected/0f5c7888-5418-4d01-bebb-e96886381bcc-kube-api-access-8cvff\") pod \"0f5c7888-5418-4d01-bebb-e96886381bcc\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.270438 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-catalog-content\") pod \"0f5c7888-5418-4d01-bebb-e96886381bcc\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.270986 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-utilities\") pod \"0f5c7888-5418-4d01-bebb-e96886381bcc\" (UID: \"0f5c7888-5418-4d01-bebb-e96886381bcc\") " Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.272967 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-utilities" (OuterVolumeSpecName: "utilities") pod "0f5c7888-5418-4d01-bebb-e96886381bcc" (UID: "0f5c7888-5418-4d01-bebb-e96886381bcc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.278698 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5c7888-5418-4d01-bebb-e96886381bcc-kube-api-access-8cvff" (OuterVolumeSpecName: "kube-api-access-8cvff") pod "0f5c7888-5418-4d01-bebb-e96886381bcc" (UID: "0f5c7888-5418-4d01-bebb-e96886381bcc"). InnerVolumeSpecName "kube-api-access-8cvff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.303117 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f5c7888-5418-4d01-bebb-e96886381bcc" (UID: "0f5c7888-5418-4d01-bebb-e96886381bcc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.374107 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.374136 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cvff\" (UniqueName: \"kubernetes.io/projected/0f5c7888-5418-4d01-bebb-e96886381bcc-kube-api-access-8cvff\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.374150 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5c7888-5418-4d01-bebb-e96886381bcc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.926658 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vwnn" 
event={"ID":"0f5c7888-5418-4d01-bebb-e96886381bcc","Type":"ContainerDied","Data":"31111dd764431da3461a57717484c13f43feab775bc5b142834db1f28ce4870e"} Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.926719 4897 scope.go:117] "RemoveContainer" containerID="24ad41e68be71ca7e1049a75cdb434accdb701507f0e119a9df8b7ee4afca775" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.926717 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vwnn" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.954211 4897 scope.go:117] "RemoveContainer" containerID="d58c3613441c2139f00fee3dfb84ce7702964194d53f961ac90cc7b835c83341" Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.958466 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vwnn"] Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.975870 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vwnn"] Feb 14 20:11:31 crc kubenswrapper[4897]: I0214 20:11:31.979930 4897 scope.go:117] "RemoveContainer" containerID="307d9fd47c1f682af23f38b9e5e6567957de3900f881b703b43bea663b2bc637" Feb 14 20:11:33 crc kubenswrapper[4897]: I0214 20:11:33.807993 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" path="/var/lib/kubelet/pods/0f5c7888-5418-4d01-bebb-e96886381bcc/volumes" Feb 14 20:11:39 crc kubenswrapper[4897]: I0214 20:11:39.683150 4897 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:11:39 crc kubenswrapper[4897]: I0214 20:11:39.743317 4897 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:11:39 crc kubenswrapper[4897]: I0214 20:11:39.929818 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5qs8z"] Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.038760 4897 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qs8z" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server" containerID="cri-o://56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd" gracePeriod=2 Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.727422 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qs8z" Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.794545 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c" Feb 14 20:11:41 crc kubenswrapper[4897]: E0214 20:11:41.795207 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9" Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.869063 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-utilities\") pod \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.869162 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-catalog-content\") pod \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " Feb 14 
20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.869443 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q2jf\" (UniqueName: \"kubernetes.io/projected/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-kube-api-access-9q2jf\") pod \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\" (UID: \"34767ec5-83b6-4d47-b0bd-180d09eb6dcb\") " Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.869965 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-utilities" (OuterVolumeSpecName: "utilities") pod "34767ec5-83b6-4d47-b0bd-180d09eb6dcb" (UID: "34767ec5-83b6-4d47-b0bd-180d09eb6dcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.871176 4897 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.877920 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-kube-api-access-9q2jf" (OuterVolumeSpecName: "kube-api-access-9q2jf") pod "34767ec5-83b6-4d47-b0bd-180d09eb6dcb" (UID: "34767ec5-83b6-4d47-b0bd-180d09eb6dcb"). InnerVolumeSpecName "kube-api-access-9q2jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 20:11:41 crc kubenswrapper[4897]: I0214 20:11:41.973429 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q2jf\" (UniqueName: \"kubernetes.io/projected/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-kube-api-access-9q2jf\") on node \"crc\" DevicePath \"\"" Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.007916 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34767ec5-83b6-4d47-b0bd-180d09eb6dcb" (UID: "34767ec5-83b6-4d47-b0bd-180d09eb6dcb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.056412 4897 generic.go:334] "Generic (PLEG): container finished" podID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerID="56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd" exitCode=0 Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.056500 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qs8z"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.057023 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerDied","Data":"56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd"}
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.057142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qs8z" event={"ID":"34767ec5-83b6-4d47-b0bd-180d09eb6dcb","Type":"ContainerDied","Data":"5324cde78eb61183ce1e8bc06e6a296454bbf44fdfcc2655f722d80e766736cf"}
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.057186 4897 scope.go:117] "RemoveContainer" containerID="56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.075568 4897 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34767ec5-83b6-4d47-b0bd-180d09eb6dcb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.093963 4897 scope.go:117] "RemoveContainer" containerID="cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.110415 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qs8z"]
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.123803 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qs8z"]
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.140335 4897 scope.go:117] "RemoveContainer" containerID="8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.169765 4897 scope.go:117] "RemoveContainer" containerID="56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd"
Feb 14 20:11:42 crc kubenswrapper[4897]: E0214 20:11:42.170520 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd\": container with ID starting with 56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd not found: ID does not exist" containerID="56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.170558 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd"} err="failed to get container status \"56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd\": rpc error: code = NotFound desc = could not find container \"56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd\": container with ID starting with 56a120ac6545aea2106d77f96e1ae19147742fd64975d526c7f60beb553a9cdd not found: ID does not exist"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.170587 4897 scope.go:117] "RemoveContainer" containerID="cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3"
Feb 14 20:11:42 crc kubenswrapper[4897]: E0214 20:11:42.171046 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3\": container with ID starting with cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3 not found: ID does not exist" containerID="cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.171082 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3"} err="failed to get container status \"cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3\": rpc error: code = NotFound desc = could not find container \"cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3\": container with ID starting with cedccf30c5de173af47d12ccdc7f70381400a2f586d9b38cd6d2ab140486b0d3 not found: ID does not exist"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.171099 4897 scope.go:117] "RemoveContainer" containerID="8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3"
Feb 14 20:11:42 crc kubenswrapper[4897]: E0214 20:11:42.171432 4897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3\": container with ID starting with 8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3 not found: ID does not exist" containerID="8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3"
Feb 14 20:11:42 crc kubenswrapper[4897]: I0214 20:11:42.171464 4897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3"} err="failed to get container status \"8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3\": rpc error: code = NotFound desc = could not find container \"8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3\": container with ID starting with 8dc1d1eb4b17f87a10bd43b1eaa24194778fc018d6019916654a991af42603a3 not found: ID does not exist"
Feb 14 20:11:43 crc kubenswrapper[4897]: I0214 20:11:43.831488 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" path="/var/lib/kubelet/pods/34767ec5-83b6-4d47-b0bd-180d09eb6dcb/volumes"
Feb 14 20:11:56 crc kubenswrapper[4897]: I0214 20:11:56.794510 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:11:56 crc kubenswrapper[4897]: E0214 20:11:56.795383 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:12:10 crc kubenswrapper[4897]: I0214 20:12:10.794844 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:12:10 crc kubenswrapper[4897]: E0214 20:12:10.795726 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:12:22 crc kubenswrapper[4897]: I0214 20:12:22.793793 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:12:22 crc kubenswrapper[4897]: E0214 20:12:22.794579 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:12:35 crc kubenswrapper[4897]: I0214 20:12:35.796181 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:12:35 crc kubenswrapper[4897]: E0214 20:12:35.797350 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:12:46 crc kubenswrapper[4897]: I0214 20:12:46.794507 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:12:46 crc kubenswrapper[4897]: E0214 20:12:46.795705 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:13:00 crc kubenswrapper[4897]: I0214 20:13:00.795401 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:13:00 crc kubenswrapper[4897]: E0214 20:13:00.796540 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:13:11 crc kubenswrapper[4897]: I0214 20:13:11.807696 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:13:11 crc kubenswrapper[4897]: E0214 20:13:11.808669 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:13:23 crc kubenswrapper[4897]: I0214 20:13:23.794507 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:13:23 crc kubenswrapper[4897]: E0214 20:13:23.795619 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:13:36 crc kubenswrapper[4897]: I0214 20:13:36.794907 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:13:36 crc kubenswrapper[4897]: E0214 20:13:36.796896 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:13:49 crc kubenswrapper[4897]: I0214 20:13:49.796102 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:13:49 crc kubenswrapper[4897]: E0214 20:13:49.801501 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:14:00 crc kubenswrapper[4897]: I0214 20:14:00.795352 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:14:00 crc kubenswrapper[4897]: E0214 20:14:00.796481 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:14:11 crc kubenswrapper[4897]: I0214 20:14:11.796236 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:14:11 crc kubenswrapper[4897]: E0214 20:14:11.797908 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:14:24 crc kubenswrapper[4897]: I0214 20:14:24.795673 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:14:24 crc kubenswrapper[4897]: E0214 20:14:24.797010 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:14:39 crc kubenswrapper[4897]: I0214 20:14:39.795345 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:14:39 crc kubenswrapper[4897]: E0214 20:14:39.796624 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:14:54 crc kubenswrapper[4897]: I0214 20:14:54.795182 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:14:54 crc kubenswrapper[4897]: E0214 20:14:54.796638 4897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k5mzq_openshift-machine-config-operator(9f885c6c-b913-48e3-93fc-abf932515ea9)\"" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" podUID="9f885c6c-b913-48e3-93fc-abf932515ea9"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.244457 4897 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"]
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245788 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245808 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245822 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="extract-utilities"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245831 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="extract-utilities"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245853 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="extract-utilities"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245863 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="extract-utilities"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245877 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="extract-content"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245885 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="extract-content"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245902 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245911 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245937 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245945 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.245965 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="extract-utilities"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.245974 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="extract-utilities"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.246004 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="extract-content"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.246012 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="extract-content"
Feb 14 20:15:00 crc kubenswrapper[4897]: E0214 20:15:00.246051 4897 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="extract-content"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.246059 4897 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="extract-content"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.246330 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="34767ec5-83b6-4d47-b0bd-180d09eb6dcb" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.246358 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5c7888-5418-4d01-bebb-e96886381bcc" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.246391 4897 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad55399-3cdc-4b71-88a1-0a81d39d37dc" containerName="registry-server"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.248025 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.261172 4897 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.271387 4897 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.289177 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"]
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.370330 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616205ea-5181-4f16-beef-cf3dddca917a-config-volume\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.371066 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616205ea-5181-4f16-beef-cf3dddca917a-secret-volume\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.371214 4897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9pxp\" (UniqueName: \"kubernetes.io/projected/616205ea-5181-4f16-beef-cf3dddca917a-kube-api-access-m9pxp\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.472879 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616205ea-5181-4f16-beef-cf3dddca917a-config-volume\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.473059 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616205ea-5181-4f16-beef-cf3dddca917a-secret-volume\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.473085 4897 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9pxp\" (UniqueName: \"kubernetes.io/projected/616205ea-5181-4f16-beef-cf3dddca917a-kube-api-access-m9pxp\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.476567 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616205ea-5181-4f16-beef-cf3dddca917a-config-volume\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.481733 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616205ea-5181-4f16-beef-cf3dddca917a-secret-volume\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.489951 4897 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9pxp\" (UniqueName: \"kubernetes.io/projected/616205ea-5181-4f16-beef-cf3dddca917a-kube-api-access-m9pxp\") pod \"collect-profiles-29518335-kj9h4\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:00 crc kubenswrapper[4897]: I0214 20:15:00.586627 4897 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:01 crc kubenswrapper[4897]: I0214 20:15:01.077559 4897 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"]
Feb 14 20:15:01 crc kubenswrapper[4897]: W0214 20:15:01.086853 4897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616205ea_5181_4f16_beef_cf3dddca917a.slice/crio-815813c966195dea5e5349e1375873d0d79d74cee83103ccdf8e639eb9553b62 WatchSource:0}: Error finding container 815813c966195dea5e5349e1375873d0d79d74cee83103ccdf8e639eb9553b62: Status 404 returned error can't find the container with id 815813c966195dea5e5349e1375873d0d79d74cee83103ccdf8e639eb9553b62
Feb 14 20:15:01 crc kubenswrapper[4897]: I0214 20:15:01.734189 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4" event={"ID":"616205ea-5181-4f16-beef-cf3dddca917a","Type":"ContainerStarted","Data":"cb8087060e14dccb4e6e530b632560cb222cf23cb803acc6791874740c5a5fbb"}
Feb 14 20:15:01 crc kubenswrapper[4897]: I0214 20:15:01.734452 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4" event={"ID":"616205ea-5181-4f16-beef-cf3dddca917a","Type":"ContainerStarted","Data":"815813c966195dea5e5349e1375873d0d79d74cee83103ccdf8e639eb9553b62"}
Feb 14 20:15:01 crc kubenswrapper[4897]: I0214 20:15:01.753450 4897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4" podStartSLOduration=1.753421287 podStartE2EDuration="1.753421287s" podCreationTimestamp="2026-02-14 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 20:15:01.750930541 +0000 UTC m=+5554.727339044" watchObservedRunningTime="2026-02-14 20:15:01.753421287 +0000 UTC m=+5554.729829810"
Feb 14 20:15:02 crc kubenswrapper[4897]: I0214 20:15:02.751790 4897 generic.go:334] "Generic (PLEG): container finished" podID="616205ea-5181-4f16-beef-cf3dddca917a" containerID="cb8087060e14dccb4e6e530b632560cb222cf23cb803acc6791874740c5a5fbb" exitCode=0
Feb 14 20:15:02 crc kubenswrapper[4897]: I0214 20:15:02.751860 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4" event={"ID":"616205ea-5181-4f16-beef-cf3dddca917a","Type":"ContainerDied","Data":"cb8087060e14dccb4e6e530b632560cb222cf23cb803acc6791874740c5a5fbb"}
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.180238 4897 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.263711 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9pxp\" (UniqueName: \"kubernetes.io/projected/616205ea-5181-4f16-beef-cf3dddca917a-kube-api-access-m9pxp\") pod \"616205ea-5181-4f16-beef-cf3dddca917a\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") "
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.264377 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616205ea-5181-4f16-beef-cf3dddca917a-secret-volume\") pod \"616205ea-5181-4f16-beef-cf3dddca917a\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") "
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.264428 4897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616205ea-5181-4f16-beef-cf3dddca917a-config-volume\") pod \"616205ea-5181-4f16-beef-cf3dddca917a\" (UID: \"616205ea-5181-4f16-beef-cf3dddca917a\") "
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.265228 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616205ea-5181-4f16-beef-cf3dddca917a-config-volume" (OuterVolumeSpecName: "config-volume") pod "616205ea-5181-4f16-beef-cf3dddca917a" (UID: "616205ea-5181-4f16-beef-cf3dddca917a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.276416 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616205ea-5181-4f16-beef-cf3dddca917a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "616205ea-5181-4f16-beef-cf3dddca917a" (UID: "616205ea-5181-4f16-beef-cf3dddca917a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.276567 4897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616205ea-5181-4f16-beef-cf3dddca917a-kube-api-access-m9pxp" (OuterVolumeSpecName: "kube-api-access-m9pxp") pod "616205ea-5181-4f16-beef-cf3dddca917a" (UID: "616205ea-5181-4f16-beef-cf3dddca917a"). InnerVolumeSpecName "kube-api-access-m9pxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.366841 4897 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9pxp\" (UniqueName: \"kubernetes.io/projected/616205ea-5181-4f16-beef-cf3dddca917a-kube-api-access-m9pxp\") on node \"crc\" DevicePath \"\""
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.366875 4897 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616205ea-5181-4f16-beef-cf3dddca917a-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.366885 4897 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616205ea-5181-4f16-beef-cf3dddca917a-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.778366 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4" event={"ID":"616205ea-5181-4f16-beef-cf3dddca917a","Type":"ContainerDied","Data":"815813c966195dea5e5349e1375873d0d79d74cee83103ccdf8e639eb9553b62"}
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.778420 4897 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="815813c966195dea5e5349e1375873d0d79d74cee83103ccdf8e639eb9553b62"
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.778488 4897 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29518335-kj9h4"
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.843711 4897 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr"]
Feb 14 20:15:04 crc kubenswrapper[4897]: I0214 20:15:04.852803 4897 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29518290-9j2mr"]
Feb 14 20:15:05 crc kubenswrapper[4897]: I0214 20:15:05.819960 4897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5a598f-2e95-4f4e-a9ba-993823b16b86" path="/var/lib/kubelet/pods/1e5a598f-2e95-4f4e-a9ba-993823b16b86/volumes"
Feb 14 20:15:07 crc kubenswrapper[4897]: I0214 20:15:07.802948 4897 scope.go:117] "RemoveContainer" containerID="30d51cef275281b5f0fb0877792d6f1baa0125601f9cb5ab373fc48d036f295c"
Feb 14 20:15:08 crc kubenswrapper[4897]: I0214 20:15:08.843142 4897 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k5mzq" event={"ID":"9f885c6c-b913-48e3-93fc-abf932515ea9","Type":"ContainerStarted","Data":"f048a48dd5aef6139e8cbad7a68cf3a08e030ac8847083a747fd736ba8b0b689"}